Test Report: KVM_Linux_crio 16899

f8194aff3a7b98ea29a2e4b2da65132feb1e4119:2023-07-17:30190

Failed tests (27/288)

Order  Failed test  Duration (s)
25 TestAddons/parallel/Ingress 152.84
36 TestAddons/StoppedEnableDisable 154.77
102 TestFunctional/parallel/License 0.1
152 TestIngressAddonLegacy/serial/ValidateIngressAddons 170.51
200 TestMultiNode/serial/PingHostFrom2Pods 3.08
206 TestMultiNode/serial/RestartKeepsNodes 681.61
208 TestMultiNode/serial/StopMultiNode 143.11
215 TestPreload 276.59
221 TestRunningBinaryUpgrade 182.75
236 TestStoppedBinaryUpgrade/Upgrade 304.18
250 TestPause/serial/SecondStartNoReconfiguration 131.59
269 TestStartStop/group/old-k8s-version/serial/Stop 139.6
272 TestStartStop/group/embed-certs/serial/Stop 140.21
276 TestStartStop/group/no-preload/serial/Stop 139.59
278 TestStartStop/group/default-k8s-diff-port/serial/Stop 140.22
279 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 12.38
281 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
283 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
284 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
287 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 543.55
288 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.46
289 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 543.49
290 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 542.08
291 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 429.03
292 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 123.34
293 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 162.01
296 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 113.05
TestAddons/parallel/Ingress (152.84s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-436248 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-436248 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-436248 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c7f365e2-76da-44c0-8ea2-4faa3cb79d1c] Pending
helpers_test.go:344: "nginx" [c7f365e2-76da-44c0-8ea2-4faa3cb79d1c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c7f365e2-76da-44c0-8ea2-4faa3cb79d1c] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.025483445s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p addons-436248 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-436248 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.2669933s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
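Note: exit status 28 from the remote command matches curl's "operation timed out" code, i.e. the request through the VM's loopback never received a response from the ingress before the deadline. A minimal manual reproduction against the same profile (a sketch; it assumes the addons-436248 profile and the ingress addon are still running, and reuses the commands from the log above):

	# Confirm the ingress-nginx controller is ready, then repeat the failing curl through the VM.
	kubectl --context addons-436248 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
	# Add -v to curl for verbose output when debugging the timeout.
	out/minikube-linux-amd64 -p addons-436248 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"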
addons_test.go:262: (dbg) Run:  kubectl --context addons-436248 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p addons-436248 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.220
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p addons-436248 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p addons-436248 addons disable ingress-dns --alsologtostderr -v=1: (1.512553293s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p addons-436248 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p addons-436248 addons disable ingress --alsologtostderr -v=1: (7.946167956s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-436248 -n addons-436248
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-436248 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-436248 logs -n 25: (1.252418876s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-896488 | jenkins | v1.31.0 | 17 Jul 23 21:40 UTC |                     |
	|         | -p download-only-896488        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-896488 | jenkins | v1.31.0 | 17 Jul 23 21:40 UTC |                     |
	|         | -p download-only-896488        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.31.0 | 17 Jul 23 21:40 UTC | 17 Jul 23 21:40 UTC |
	| delete  | -p download-only-896488        | download-only-896488 | jenkins | v1.31.0 | 17 Jul 23 21:40 UTC | 17 Jul 23 21:40 UTC |
	| delete  | -p download-only-896488        | download-only-896488 | jenkins | v1.31.0 | 17 Jul 23 21:40 UTC | 17 Jul 23 21:40 UTC |
	| start   | --download-only -p             | binary-mirror-879454 | jenkins | v1.31.0 | 17 Jul 23 21:40 UTC |                     |
	|         | binary-mirror-879454           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:43523         |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-879454        | binary-mirror-879454 | jenkins | v1.31.0 | 17 Jul 23 21:40 UTC | 17 Jul 23 21:40 UTC |
	| start   | -p addons-436248               | addons-436248        | jenkins | v1.31.0 | 17 Jul 23 21:40 UTC | 17 Jul 23 21:43 UTC |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	|         | --addons=helm-tiller           |                      |         |         |                     |                     |
	| addons  | enable headlamp                | addons-436248        | jenkins | v1.31.0 | 17 Jul 23 21:43 UTC | 17 Jul 23 21:43 UTC |
	|         | -p addons-436248               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-436248        | jenkins | v1.31.0 | 17 Jul 23 21:43 UTC | 17 Jul 23 21:43 UTC |
	|         | addons-436248                  |                      |         |         |                     |                     |
	| addons  | addons-436248 addons           | addons-436248        | jenkins | v1.31.0 | 17 Jul 23 21:43 UTC | 17 Jul 23 21:43 UTC |
	|         | disable metrics-server         |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-436248        | jenkins | v1.31.0 | 17 Jul 23 21:43 UTC | 17 Jul 23 21:43 UTC |
	|         | addons-436248                  |                      |         |         |                     |                     |
	| ip      | addons-436248 ip               | addons-436248        | jenkins | v1.31.0 | 17 Jul 23 21:43 UTC | 17 Jul 23 21:43 UTC |
	| addons  | addons-436248 addons disable   | addons-436248        | jenkins | v1.31.0 | 17 Jul 23 21:43 UTC | 17 Jul 23 21:43 UTC |
	|         | registry --alsologtostderr     |                      |         |         |                     |                     |
	|         | -v=1                           |                      |         |         |                     |                     |
	| addons  | addons-436248 addons disable   | addons-436248        | jenkins | v1.31.0 | 17 Jul 23 21:43 UTC | 17 Jul 23 21:43 UTC |
	|         | helm-tiller --alsologtostderr  |                      |         |         |                     |                     |
	|         | -v=1                           |                      |         |         |                     |                     |
	| ssh     | addons-436248 ssh curl -s      | addons-436248        | jenkins | v1.31.0 | 17 Jul 23 21:43 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:    |                      |         |         |                     |                     |
	|         | nginx.example.com'             |                      |         |         |                     |                     |
	| addons  | addons-436248 addons           | addons-436248        | jenkins | v1.31.0 | 17 Jul 23 21:44 UTC | 17 Jul 23 21:44 UTC |
	|         | disable csi-hostpath-driver    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | addons-436248 addons           | addons-436248        | jenkins | v1.31.0 | 17 Jul 23 21:44 UTC | 17 Jul 23 21:44 UTC |
	|         | disable volumesnapshots        |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| ip      | addons-436248 ip               | addons-436248        | jenkins | v1.31.0 | 17 Jul 23 21:45 UTC | 17 Jul 23 21:45 UTC |
	| addons  | addons-436248 addons disable   | addons-436248        | jenkins | v1.31.0 | 17 Jul 23 21:45 UTC | 17 Jul 23 21:45 UTC |
	|         | ingress-dns --alsologtostderr  |                      |         |         |                     |                     |
	|         | -v=1                           |                      |         |         |                     |                     |
	| addons  | addons-436248 addons disable   | addons-436248        | jenkins | v1.31.0 | 17 Jul 23 21:45 UTC | 17 Jul 23 21:45 UTC |
	|         | ingress --alsologtostderr -v=1 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
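	For readability, the cluster-creation command recorded in the Audit entry above, joined onto one line (arguments copied verbatim in the order listed; the out/minikube-linux-amd64 binary path is taken from the ssh/ip entries elsewhere in this report):

	out/minikube-linux-amd64 start -p addons-436248 --wait=true --memory=4000 --alsologtostderr \
	  --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver \
	  --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2 \
	  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller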
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 21:40:49
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 21:40:49.231377   23321 out.go:296] Setting OutFile to fd 1 ...
	I0717 21:40:49.231636   23321 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:40:49.231650   23321 out.go:309] Setting ErrFile to fd 2...
	I0717 21:40:49.231657   23321 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:40:49.232137   23321 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-15759/.minikube/bin
	I0717 21:40:49.233010   23321 out.go:303] Setting JSON to false
	I0717 21:40:49.233801   23321 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5001,"bootTime":1689625048,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 21:40:49.233857   23321 start.go:138] virtualization: kvm guest
	I0717 21:40:49.236248   23321 out.go:177] * [addons-436248] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 21:40:49.238154   23321 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 21:40:49.239525   23321 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 21:40:49.238172   23321 notify.go:220] Checking for updates...
	I0717 21:40:49.242172   23321 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 21:40:49.243660   23321 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-15759/.minikube
	I0717 21:40:49.245004   23321 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 21:40:49.246370   23321 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 21:40:49.247674   23321 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 21:40:49.278720   23321 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 21:40:49.280063   23321 start.go:298] selected driver: kvm2
	I0717 21:40:49.280074   23321 start.go:880] validating driver "kvm2" against <nil>
	I0717 21:40:49.280083   23321 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 21:40:49.280728   23321 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:40:49.280795   23321 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16899-15759/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 21:40:49.294343   23321 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.0
	I0717 21:40:49.294379   23321 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 21:40:49.294567   23321 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 21:40:49.294594   23321 cni.go:84] Creating CNI manager for ""
	I0717 21:40:49.294606   23321 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 21:40:49.294616   23321 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 21:40:49.294623   23321 start_flags.go:319] config:
	{Name:addons-436248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-436248 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni
FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:40:49.294737   23321 iso.go:125] acquiring lock: {Name:mk0fc3164b0f4222cb2c3b274841202f1f6c0fc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:40:49.296471   23321 out.go:177] * Starting control plane node addons-436248 in cluster addons-436248
	I0717 21:40:49.297715   23321 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 21:40:49.297744   23321 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0717 21:40:49.297766   23321 cache.go:57] Caching tarball of preloaded images
	I0717 21:40:49.297834   23321 preload.go:174] Found /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 21:40:49.297844   23321 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 21:40:49.298143   23321 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/config.json ...
	I0717 21:40:49.298162   23321 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/config.json: {Name:mkf821305d6ce675faf6accf24f5f0073234ab89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:40:49.298279   23321 start.go:365] acquiring machines lock for addons-436248: {Name:mk8a02d78ba6485e1a7d39c2e7feff944c6a9dbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 21:40:49.298321   23321 start.go:369] acquired machines lock for "addons-436248" in 29.705µs
	I0717 21:40:49.298335   23321 start.go:93] Provisioning new machine with config: &{Name:addons-436248 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-436248
Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 21:40:49.298407   23321 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 21:40:49.299985   23321 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0717 21:40:49.300084   23321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:40:49.300129   23321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:40:49.313332   23321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46171
	I0717 21:40:49.313769   23321 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:40:49.314389   23321 main.go:141] libmachine: Using API Version  1
	I0717 21:40:49.314415   23321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:40:49.314725   23321 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:40:49.314889   23321 main.go:141] libmachine: (addons-436248) Calling .GetMachineName
	I0717 21:40:49.315030   23321 main.go:141] libmachine: (addons-436248) Calling .DriverName
	I0717 21:40:49.315162   23321 start.go:159] libmachine.API.Create for "addons-436248" (driver="kvm2")
	I0717 21:40:49.315185   23321 client.go:168] LocalClient.Create starting
	I0717 21:40:49.315218   23321 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem
	I0717 21:40:49.426642   23321 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem
	I0717 21:40:49.485151   23321 main.go:141] libmachine: Running pre-create checks...
	I0717 21:40:49.485177   23321 main.go:141] libmachine: (addons-436248) Calling .PreCreateCheck
	I0717 21:40:49.485673   23321 main.go:141] libmachine: (addons-436248) Calling .GetConfigRaw
	I0717 21:40:49.486090   23321 main.go:141] libmachine: Creating machine...
	I0717 21:40:49.486105   23321 main.go:141] libmachine: (addons-436248) Calling .Create
	I0717 21:40:49.486244   23321 main.go:141] libmachine: (addons-436248) Creating KVM machine...
	I0717 21:40:49.487545   23321 main.go:141] libmachine: (addons-436248) DBG | found existing default KVM network
	I0717 21:40:49.488234   23321 main.go:141] libmachine: (addons-436248) DBG | I0717 21:40:49.488109   23344 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f130}
	I0717 21:40:49.493606   23321 main.go:141] libmachine: (addons-436248) DBG | trying to create private KVM network mk-addons-436248 192.168.39.0/24...
	I0717 21:40:49.562067   23321 main.go:141] libmachine: (addons-436248) Setting up store path in /home/jenkins/minikube-integration/16899-15759/.minikube/machines/addons-436248 ...
	I0717 21:40:49.562108   23321 main.go:141] libmachine: (addons-436248) Building disk image from file:///home/jenkins/minikube-integration/16899-15759/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso
	I0717 21:40:49.562118   23321 main.go:141] libmachine: (addons-436248) DBG | private KVM network mk-addons-436248 192.168.39.0/24 created
	I0717 21:40:49.562131   23321 main.go:141] libmachine: (addons-436248) DBG | I0717 21:40:49.562019   23344 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/16899-15759/.minikube
	I0717 21:40:49.562177   23321 main.go:141] libmachine: (addons-436248) Downloading /home/jenkins/minikube-integration/16899-15759/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16899-15759/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso...
	I0717 21:40:49.761534   23321 main.go:141] libmachine: (addons-436248) DBG | I0717 21:40:49.761389   23344 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/addons-436248/id_rsa...
	I0717 21:40:49.901813   23321 main.go:141] libmachine: (addons-436248) DBG | I0717 21:40:49.901696   23344 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/addons-436248/addons-436248.rawdisk...
	I0717 21:40:49.901857   23321 main.go:141] libmachine: (addons-436248) DBG | Writing magic tar header
	I0717 21:40:49.901868   23321 main.go:141] libmachine: (addons-436248) DBG | Writing SSH key tar header
	I0717 21:40:49.901878   23321 main.go:141] libmachine: (addons-436248) DBG | I0717 21:40:49.901839   23344 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/16899-15759/.minikube/machines/addons-436248 ...
	I0717 21:40:49.902033   23321 main.go:141] libmachine: (addons-436248) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/addons-436248
	I0717 21:40:49.902081   23321 main.go:141] libmachine: (addons-436248) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16899-15759/.minikube/machines
	I0717 21:40:49.902099   23321 main.go:141] libmachine: (addons-436248) Setting executable bit set on /home/jenkins/minikube-integration/16899-15759/.minikube/machines/addons-436248 (perms=drwx------)
	I0717 21:40:49.902118   23321 main.go:141] libmachine: (addons-436248) Setting executable bit set on /home/jenkins/minikube-integration/16899-15759/.minikube/machines (perms=drwxr-xr-x)
	I0717 21:40:49.902136   23321 main.go:141] libmachine: (addons-436248) Setting executable bit set on /home/jenkins/minikube-integration/16899-15759/.minikube (perms=drwxr-xr-x)
	I0717 21:40:49.902144   23321 main.go:141] libmachine: (addons-436248) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16899-15759/.minikube
	I0717 21:40:49.902156   23321 main.go:141] libmachine: (addons-436248) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16899-15759
	I0717 21:40:49.902164   23321 main.go:141] libmachine: (addons-436248) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 21:40:49.902176   23321 main.go:141] libmachine: (addons-436248) DBG | Checking permissions on dir: /home/jenkins
	I0717 21:40:49.902188   23321 main.go:141] libmachine: (addons-436248) DBG | Checking permissions on dir: /home
	I0717 21:40:49.902205   23321 main.go:141] libmachine: (addons-436248) DBG | Skipping /home - not owner
	I0717 21:40:49.902225   23321 main.go:141] libmachine: (addons-436248) Setting executable bit set on /home/jenkins/minikube-integration/16899-15759 (perms=drwxrwxr-x)
	I0717 21:40:49.902241   23321 main.go:141] libmachine: (addons-436248) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 21:40:49.902252   23321 main.go:141] libmachine: (addons-436248) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 21:40:49.902263   23321 main.go:141] libmachine: (addons-436248) Creating domain...
	I0717 21:40:49.903216   23321 main.go:141] libmachine: (addons-436248) define libvirt domain using xml: 
	I0717 21:40:49.903240   23321 main.go:141] libmachine: (addons-436248) <domain type='kvm'>
	I0717 21:40:49.903253   23321 main.go:141] libmachine: (addons-436248)   <name>addons-436248</name>
	I0717 21:40:49.903265   23321 main.go:141] libmachine: (addons-436248)   <memory unit='MiB'>4000</memory>
	I0717 21:40:49.903275   23321 main.go:141] libmachine: (addons-436248)   <vcpu>2</vcpu>
	I0717 21:40:49.903284   23321 main.go:141] libmachine: (addons-436248)   <features>
	I0717 21:40:49.903301   23321 main.go:141] libmachine: (addons-436248)     <acpi/>
	I0717 21:40:49.903320   23321 main.go:141] libmachine: (addons-436248)     <apic/>
	I0717 21:40:49.903333   23321 main.go:141] libmachine: (addons-436248)     <pae/>
	I0717 21:40:49.903343   23321 main.go:141] libmachine: (addons-436248)     
	I0717 21:40:49.903357   23321 main.go:141] libmachine: (addons-436248)   </features>
	I0717 21:40:49.903374   23321 main.go:141] libmachine: (addons-436248)   <cpu mode='host-passthrough'>
	I0717 21:40:49.903385   23321 main.go:141] libmachine: (addons-436248)   
	I0717 21:40:49.903396   23321 main.go:141] libmachine: (addons-436248)   </cpu>
	I0717 21:40:49.903421   23321 main.go:141] libmachine: (addons-436248)   <os>
	I0717 21:40:49.903439   23321 main.go:141] libmachine: (addons-436248)     <type>hvm</type>
	I0717 21:40:49.903453   23321 main.go:141] libmachine: (addons-436248)     <boot dev='cdrom'/>
	I0717 21:40:49.903462   23321 main.go:141] libmachine: (addons-436248)     <boot dev='hd'/>
	I0717 21:40:49.903470   23321 main.go:141] libmachine: (addons-436248)     <bootmenu enable='no'/>
	I0717 21:40:49.903485   23321 main.go:141] libmachine: (addons-436248)   </os>
	I0717 21:40:49.903495   23321 main.go:141] libmachine: (addons-436248)   <devices>
	I0717 21:40:49.903504   23321 main.go:141] libmachine: (addons-436248)     <disk type='file' device='cdrom'>
	I0717 21:40:49.903516   23321 main.go:141] libmachine: (addons-436248)       <source file='/home/jenkins/minikube-integration/16899-15759/.minikube/machines/addons-436248/boot2docker.iso'/>
	I0717 21:40:49.903525   23321 main.go:141] libmachine: (addons-436248)       <target dev='hdc' bus='scsi'/>
	I0717 21:40:49.903534   23321 main.go:141] libmachine: (addons-436248)       <readonly/>
	I0717 21:40:49.903542   23321 main.go:141] libmachine: (addons-436248)     </disk>
	I0717 21:40:49.903552   23321 main.go:141] libmachine: (addons-436248)     <disk type='file' device='disk'>
	I0717 21:40:49.903563   23321 main.go:141] libmachine: (addons-436248)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 21:40:49.903574   23321 main.go:141] libmachine: (addons-436248)       <source file='/home/jenkins/minikube-integration/16899-15759/.minikube/machines/addons-436248/addons-436248.rawdisk'/>
	I0717 21:40:49.903586   23321 main.go:141] libmachine: (addons-436248)       <target dev='hda' bus='virtio'/>
	I0717 21:40:49.903596   23321 main.go:141] libmachine: (addons-436248)     </disk>
	I0717 21:40:49.903606   23321 main.go:141] libmachine: (addons-436248)     <interface type='network'>
	I0717 21:40:49.903615   23321 main.go:141] libmachine: (addons-436248)       <source network='mk-addons-436248'/>
	I0717 21:40:49.903624   23321 main.go:141] libmachine: (addons-436248)       <model type='virtio'/>
	I0717 21:40:49.903632   23321 main.go:141] libmachine: (addons-436248)     </interface>
	I0717 21:40:49.903639   23321 main.go:141] libmachine: (addons-436248)     <interface type='network'>
	I0717 21:40:49.903649   23321 main.go:141] libmachine: (addons-436248)       <source network='default'/>
	I0717 21:40:49.903655   23321 main.go:141] libmachine: (addons-436248)       <model type='virtio'/>
	I0717 21:40:49.903663   23321 main.go:141] libmachine: (addons-436248)     </interface>
	I0717 21:40:49.903671   23321 main.go:141] libmachine: (addons-436248)     <serial type='pty'>
	I0717 21:40:49.903681   23321 main.go:141] libmachine: (addons-436248)       <target port='0'/>
	I0717 21:40:49.903689   23321 main.go:141] libmachine: (addons-436248)     </serial>
	I0717 21:40:49.903698   23321 main.go:141] libmachine: (addons-436248)     <console type='pty'>
	I0717 21:40:49.903711   23321 main.go:141] libmachine: (addons-436248)       <target type='serial' port='0'/>
	I0717 21:40:49.903724   23321 main.go:141] libmachine: (addons-436248)     </console>
	I0717 21:40:49.903735   23321 main.go:141] libmachine: (addons-436248)     <rng model='virtio'>
	I0717 21:40:49.903745   23321 main.go:141] libmachine: (addons-436248)       <backend model='random'>/dev/random</backend>
	I0717 21:40:49.903751   23321 main.go:141] libmachine: (addons-436248)     </rng>
	I0717 21:40:49.903759   23321 main.go:141] libmachine: (addons-436248)     
	I0717 21:40:49.903769   23321 main.go:141] libmachine: (addons-436248)     
	I0717 21:40:49.903778   23321 main.go:141] libmachine: (addons-436248)   </devices>
	I0717 21:40:49.903785   23321 main.go:141] libmachine: (addons-436248) </domain>
	I0717 21:40:49.903793   23321 main.go:141] libmachine: (addons-436248) 
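	For readability, the libvirt domain definition logged line by line above, with the log prefixes and blank lines stripped (content reproduced verbatim from the log):

	<domain type='kvm'>
	  <name>addons-436248</name>
	  <memory unit='MiB'>4000</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/16899-15759/.minikube/machines/addons-436248/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/16899-15759/.minikube/machines/addons-436248/addons-436248.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-436248'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>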
	I0717 21:40:49.909494   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:c2:7a:d3 in network default
	I0717 21:40:49.910021   23321 main.go:141] libmachine: (addons-436248) Ensuring networks are active...
	I0717 21:40:49.910055   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:40:49.910620   23321 main.go:141] libmachine: (addons-436248) Ensuring network default is active
	I0717 21:40:49.910983   23321 main.go:141] libmachine: (addons-436248) Ensuring network mk-addons-436248 is active
	I0717 21:40:49.911435   23321 main.go:141] libmachine: (addons-436248) Getting domain xml...
	I0717 21:40:49.912033   23321 main.go:141] libmachine: (addons-436248) Creating domain...
	I0717 21:40:50.466655   23321 main.go:141] libmachine: (addons-436248) Waiting to get IP...
	I0717 21:40:50.468009   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:40:50.469318   23321 main.go:141] libmachine: (addons-436248) DBG | unable to find current IP address of domain addons-436248 in network mk-addons-436248
	I0717 21:40:50.469360   23321 main.go:141] libmachine: (addons-436248) DBG | I0717 21:40:50.469316   23344 retry.go:31] will retry after 250.772864ms: waiting for machine to come up
	I0717 21:40:50.722037   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:40:50.722407   23321 main.go:141] libmachine: (addons-436248) DBG | unable to find current IP address of domain addons-436248 in network mk-addons-436248
	I0717 21:40:50.722467   23321 main.go:141] libmachine: (addons-436248) DBG | I0717 21:40:50.722356   23344 retry.go:31] will retry after 258.123861ms: waiting for machine to come up
	I0717 21:40:50.981855   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:40:50.982273   23321 main.go:141] libmachine: (addons-436248) DBG | unable to find current IP address of domain addons-436248 in network mk-addons-436248
	I0717 21:40:50.982301   23321 main.go:141] libmachine: (addons-436248) DBG | I0717 21:40:50.982209   23344 retry.go:31] will retry after 450.935184ms: waiting for machine to come up
	I0717 21:40:51.434858   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:40:51.435324   23321 main.go:141] libmachine: (addons-436248) DBG | unable to find current IP address of domain addons-436248 in network mk-addons-436248
	I0717 21:40:51.435356   23321 main.go:141] libmachine: (addons-436248) DBG | I0717 21:40:51.435251   23344 retry.go:31] will retry after 382.906787ms: waiting for machine to come up
	I0717 21:40:51.819557   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:40:51.820005   23321 main.go:141] libmachine: (addons-436248) DBG | unable to find current IP address of domain addons-436248 in network mk-addons-436248
	I0717 21:40:51.820031   23321 main.go:141] libmachine: (addons-436248) DBG | I0717 21:40:51.819954   23344 retry.go:31] will retry after 703.261306ms: waiting for machine to come up
	I0717 21:40:52.524809   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:40:52.525285   23321 main.go:141] libmachine: (addons-436248) DBG | unable to find current IP address of domain addons-436248 in network mk-addons-436248
	I0717 21:40:52.525314   23321 main.go:141] libmachine: (addons-436248) DBG | I0717 21:40:52.525266   23344 retry.go:31] will retry after 939.344242ms: waiting for machine to come up
	I0717 21:40:53.466289   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:40:53.466669   23321 main.go:141] libmachine: (addons-436248) DBG | unable to find current IP address of domain addons-436248 in network mk-addons-436248
	I0717 21:40:53.466696   23321 main.go:141] libmachine: (addons-436248) DBG | I0717 21:40:53.466638   23344 retry.go:31] will retry after 957.709932ms: waiting for machine to come up
	I0717 21:40:54.426280   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:40:54.426773   23321 main.go:141] libmachine: (addons-436248) DBG | unable to find current IP address of domain addons-436248 in network mk-addons-436248
	I0717 21:40:54.426801   23321 main.go:141] libmachine: (addons-436248) DBG | I0717 21:40:54.426693   23344 retry.go:31] will retry after 935.101539ms: waiting for machine to come up
	I0717 21:40:55.363726   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:40:55.364130   23321 main.go:141] libmachine: (addons-436248) DBG | unable to find current IP address of domain addons-436248 in network mk-addons-436248
	I0717 21:40:55.364159   23321 main.go:141] libmachine: (addons-436248) DBG | I0717 21:40:55.364083   23344 retry.go:31] will retry after 1.62307932s: waiting for machine to come up
	I0717 21:40:56.989798   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:40:56.990194   23321 main.go:141] libmachine: (addons-436248) DBG | unable to find current IP address of domain addons-436248 in network mk-addons-436248
	I0717 21:40:56.990218   23321 main.go:141] libmachine: (addons-436248) DBG | I0717 21:40:56.990167   23344 retry.go:31] will retry after 1.741224277s: waiting for machine to come up
	I0717 21:40:58.734162   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:40:58.734550   23321 main.go:141] libmachine: (addons-436248) DBG | unable to find current IP address of domain addons-436248 in network mk-addons-436248
	I0717 21:40:58.734579   23321 main.go:141] libmachine: (addons-436248) DBG | I0717 21:40:58.734497   23344 retry.go:31] will retry after 2.428038908s: waiting for machine to come up
	I0717 21:41:01.165107   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:01.165470   23321 main.go:141] libmachine: (addons-436248) DBG | unable to find current IP address of domain addons-436248 in network mk-addons-436248
	I0717 21:41:01.165489   23321 main.go:141] libmachine: (addons-436248) DBG | I0717 21:41:01.165441   23344 retry.go:31] will retry after 3.501882991s: waiting for machine to come up
	I0717 21:41:04.668689   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:04.669154   23321 main.go:141] libmachine: (addons-436248) DBG | unable to find current IP address of domain addons-436248 in network mk-addons-436248
	I0717 21:41:04.669186   23321 main.go:141] libmachine: (addons-436248) DBG | I0717 21:41:04.669093   23344 retry.go:31] will retry after 4.413400618s: waiting for machine to come up
	I0717 21:41:09.087564   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:09.087899   23321 main.go:141] libmachine: (addons-436248) DBG | unable to find current IP address of domain addons-436248 in network mk-addons-436248
	I0717 21:41:09.087921   23321 main.go:141] libmachine: (addons-436248) DBG | I0717 21:41:09.087870   23344 retry.go:31] will retry after 5.128214243s: waiting for machine to come up
	I0717 21:41:14.221872   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:14.222295   23321 main.go:141] libmachine: (addons-436248) Found IP for machine: 192.168.39.220
	I0717 21:41:14.222322   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has current primary IP address 192.168.39.220 and MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:14.222329   23321 main.go:141] libmachine: (addons-436248) Reserving static IP address...
	I0717 21:41:14.222746   23321 main.go:141] libmachine: (addons-436248) DBG | unable to find host DHCP lease matching {name: "addons-436248", mac: "52:54:00:62:fd:1c", ip: "192.168.39.220"} in network mk-addons-436248
	I0717 21:41:14.297514   23321 main.go:141] libmachine: (addons-436248) DBG | Getting to WaitForSSH function...
	I0717 21:41:14.297603   23321 main.go:141] libmachine: (addons-436248) Reserved static IP address: 192.168.39.220
	I0717 21:41:14.297619   23321 main.go:141] libmachine: (addons-436248) Waiting for SSH to be available...
	I0717 21:41:14.300077   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:14.300452   23321 main.go:141] libmachine: (addons-436248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:fd:1c", ip: ""} in network mk-addons-436248: {Iface:virbr1 ExpiryTime:2023-07-17 22:41:04 +0000 UTC Type:0 Mac:52:54:00:62:fd:1c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:minikube Clientid:01:52:54:00:62:fd:1c}
	I0717 21:41:14.300484   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined IP address 192.168.39.220 and MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:14.300593   23321 main.go:141] libmachine: (addons-436248) DBG | Using SSH client type: external
	I0717 21:41:14.300621   23321 main.go:141] libmachine: (addons-436248) DBG | Using SSH private key: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/addons-436248/id_rsa (-rw-------)
	I0717 21:41:14.300655   23321 main.go:141] libmachine: (addons-436248) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.220 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16899-15759/.minikube/machines/addons-436248/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 21:41:14.300683   23321 main.go:141] libmachine: (addons-436248) DBG | About to run SSH command:
	I0717 21:41:14.300698   23321 main.go:141] libmachine: (addons-436248) DBG | exit 0
	I0717 21:41:14.397397   23321 main.go:141] libmachine: (addons-436248) DBG | SSH cmd err, output: <nil>: 
	I0717 21:41:14.397720   23321 main.go:141] libmachine: (addons-436248) KVM machine creation complete!
	I0717 21:41:14.398076   23321 main.go:141] libmachine: (addons-436248) Calling .GetConfigRaw
	I0717 21:41:14.398618   23321 main.go:141] libmachine: (addons-436248) Calling .DriverName
	I0717 21:41:14.398835   23321 main.go:141] libmachine: (addons-436248) Calling .DriverName
	I0717 21:41:14.398986   23321 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 21:41:14.399006   23321 main.go:141] libmachine: (addons-436248) Calling .GetState
	I0717 21:41:14.400138   23321 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 21:41:14.400159   23321 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 21:41:14.400171   23321 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 21:41:14.400182   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHHostname
	I0717 21:41:14.402402   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:14.403177   23321 main.go:141] libmachine: (addons-436248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:fd:1c", ip: ""} in network mk-addons-436248: {Iface:virbr1 ExpiryTime:2023-07-17 22:41:04 +0000 UTC Type:0 Mac:52:54:00:62:fd:1c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-436248 Clientid:01:52:54:00:62:fd:1c}
	I0717 21:41:14.403223   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined IP address 192.168.39.220 and MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:14.403092   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHPort
	I0717 21:41:14.403532   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHKeyPath
	I0717 21:41:14.403745   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHKeyPath
	I0717 21:41:14.404051   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHUsername
	I0717 21:41:14.404434   23321 main.go:141] libmachine: Using SSH client type: native
	I0717 21:41:14.404817   23321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0717 21:41:14.404831   23321 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 21:41:14.532734   23321 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 21:41:14.532767   23321 main.go:141] libmachine: Detecting the provisioner...
	I0717 21:41:14.532777   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHHostname
	I0717 21:41:14.535398   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:14.535707   23321 main.go:141] libmachine: (addons-436248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:fd:1c", ip: ""} in network mk-addons-436248: {Iface:virbr1 ExpiryTime:2023-07-17 22:41:04 +0000 UTC Type:0 Mac:52:54:00:62:fd:1c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-436248 Clientid:01:52:54:00:62:fd:1c}
	I0717 21:41:14.535737   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined IP address 192.168.39.220 and MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:14.535893   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHPort
	I0717 21:41:14.536078   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHKeyPath
	I0717 21:41:14.536216   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHKeyPath
	I0717 21:41:14.536339   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHUsername
	I0717 21:41:14.536491   23321 main.go:141] libmachine: Using SSH client type: native
	I0717 21:41:14.536882   23321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0717 21:41:14.536896   23321 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 21:41:14.666256   23321 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gf5d52c7-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0717 21:41:14.666344   23321 main.go:141] libmachine: found compatible host: buildroot
	I0717 21:41:14.666358   23321 main.go:141] libmachine: Provisioning with buildroot...
	I0717 21:41:14.666369   23321 main.go:141] libmachine: (addons-436248) Calling .GetMachineName
	I0717 21:41:14.666651   23321 buildroot.go:166] provisioning hostname "addons-436248"
	I0717 21:41:14.666680   23321 main.go:141] libmachine: (addons-436248) Calling .GetMachineName
	I0717 21:41:14.666873   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHHostname
	I0717 21:41:14.669351   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:14.669742   23321 main.go:141] libmachine: (addons-436248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:fd:1c", ip: ""} in network mk-addons-436248: {Iface:virbr1 ExpiryTime:2023-07-17 22:41:04 +0000 UTC Type:0 Mac:52:54:00:62:fd:1c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-436248 Clientid:01:52:54:00:62:fd:1c}
	I0717 21:41:14.669776   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined IP address 192.168.39.220 and MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:14.669935   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHPort
	I0717 21:41:14.670119   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHKeyPath
	I0717 21:41:14.670241   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHKeyPath
	I0717 21:41:14.670364   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHUsername
	I0717 21:41:14.670496   23321 main.go:141] libmachine: Using SSH client type: native
	I0717 21:41:14.670943   23321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0717 21:41:14.670960   23321 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-436248 && echo "addons-436248" | sudo tee /etc/hostname
	I0717 21:41:14.814058   23321 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-436248
	
	I0717 21:41:14.814089   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHHostname
	I0717 21:41:14.816849   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:14.817195   23321 main.go:141] libmachine: (addons-436248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:fd:1c", ip: ""} in network mk-addons-436248: {Iface:virbr1 ExpiryTime:2023-07-17 22:41:04 +0000 UTC Type:0 Mac:52:54:00:62:fd:1c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-436248 Clientid:01:52:54:00:62:fd:1c}
	I0717 21:41:14.817227   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined IP address 192.168.39.220 and MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:14.817353   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHPort
	I0717 21:41:14.817568   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHKeyPath
	I0717 21:41:14.817724   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHKeyPath
	I0717 21:41:14.817850   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHUsername
	I0717 21:41:14.818005   23321 main.go:141] libmachine: Using SSH client type: native
	I0717 21:41:14.818400   23321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0717 21:41:14.818433   23321 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-436248' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-436248/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-436248' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 21:41:14.958669   23321 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 21:41:14.958694   23321 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16899-15759/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-15759/.minikube}
	I0717 21:41:14.958740   23321 buildroot.go:174] setting up certificates
	I0717 21:41:14.958755   23321 provision.go:83] configureAuth start
	I0717 21:41:14.958766   23321 main.go:141] libmachine: (addons-436248) Calling .GetMachineName
	I0717 21:41:14.959072   23321 main.go:141] libmachine: (addons-436248) Calling .GetIP
	I0717 21:41:14.961469   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:14.961781   23321 main.go:141] libmachine: (addons-436248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:fd:1c", ip: ""} in network mk-addons-436248: {Iface:virbr1 ExpiryTime:2023-07-17 22:41:04 +0000 UTC Type:0 Mac:52:54:00:62:fd:1c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-436248 Clientid:01:52:54:00:62:fd:1c}
	I0717 21:41:14.961804   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined IP address 192.168.39.220 and MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:14.961991   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHHostname
	I0717 21:41:14.964001   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:14.964382   23321 main.go:141] libmachine: (addons-436248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:fd:1c", ip: ""} in network mk-addons-436248: {Iface:virbr1 ExpiryTime:2023-07-17 22:41:04 +0000 UTC Type:0 Mac:52:54:00:62:fd:1c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-436248 Clientid:01:52:54:00:62:fd:1c}
	I0717 21:41:14.964405   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined IP address 192.168.39.220 and MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:14.964501   23321 provision.go:138] copyHostCerts
	I0717 21:41:14.964560   23321 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem (1078 bytes)
	I0717 21:41:14.964676   23321 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem (1123 bytes)
	I0717 21:41:14.964747   23321 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem (1675 bytes)
	I0717 21:41:14.964799   23321 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem org=jenkins.addons-436248 san=[192.168.39.220 192.168.39.220 localhost 127.0.0.1 minikube addons-436248]
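The server certificate above is generated in-process by minikube's own cert helpers; for reference only, a roughly equivalent plain-openssl invocation (a sketch with assumed file names, not a command taken from this run) would be:

	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.addons-436248"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 365 \
	  -extfile <(printf "subjectAltName=IP:192.168.39.220,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:addons-436248")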
	I0717 21:41:15.126026   23321 provision.go:172] copyRemoteCerts
	I0717 21:41:15.126086   23321 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 21:41:15.126108   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHHostname
	I0717 21:41:15.128822   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:15.129122   23321 main.go:141] libmachine: (addons-436248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:fd:1c", ip: ""} in network mk-addons-436248: {Iface:virbr1 ExpiryTime:2023-07-17 22:41:04 +0000 UTC Type:0 Mac:52:54:00:62:fd:1c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-436248 Clientid:01:52:54:00:62:fd:1c}
	I0717 21:41:15.129152   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined IP address 192.168.39.220 and MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:15.129335   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHPort
	I0717 21:41:15.129510   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHKeyPath
	I0717 21:41:15.129680   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHUsername
	I0717 21:41:15.129782   23321 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/addons-436248/id_rsa Username:docker}
	I0717 21:41:15.223283   23321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 21:41:15.246372   23321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 21:41:15.268550   23321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0717 21:41:15.290627   23321 provision.go:86] duration metric: configureAuth took 331.857434ms
	I0717 21:41:15.290658   23321 buildroot.go:189] setting minikube options for container-runtime
	I0717 21:41:15.290865   23321 config.go:182] Loaded profile config "addons-436248": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 21:41:15.290973   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHHostname
	I0717 21:41:15.293304   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:15.293672   23321 main.go:141] libmachine: (addons-436248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:fd:1c", ip: ""} in network mk-addons-436248: {Iface:virbr1 ExpiryTime:2023-07-17 22:41:04 +0000 UTC Type:0 Mac:52:54:00:62:fd:1c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-436248 Clientid:01:52:54:00:62:fd:1c}
	I0717 21:41:15.293707   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined IP address 192.168.39.220 and MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:15.293872   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHPort
	I0717 21:41:15.294061   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHKeyPath
	I0717 21:41:15.294208   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHKeyPath
	I0717 21:41:15.294334   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHUsername
	I0717 21:41:15.294459   23321 main.go:141] libmachine: Using SSH client type: native
	I0717 21:41:15.294848   23321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0717 21:41:15.294870   23321 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 21:41:15.603655   23321 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 21:41:15.603692   23321 main.go:141] libmachine: Checking connection to Docker...
	I0717 21:41:15.603711   23321 main.go:141] libmachine: (addons-436248) Calling .GetURL
	I0717 21:41:15.604872   23321 main.go:141] libmachine: (addons-436248) DBG | Using libvirt version 6000000
	I0717 21:41:15.607005   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:15.607401   23321 main.go:141] libmachine: (addons-436248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:fd:1c", ip: ""} in network mk-addons-436248: {Iface:virbr1 ExpiryTime:2023-07-17 22:41:04 +0000 UTC Type:0 Mac:52:54:00:62:fd:1c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-436248 Clientid:01:52:54:00:62:fd:1c}
	I0717 21:41:15.607426   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined IP address 192.168.39.220 and MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:15.607573   23321 main.go:141] libmachine: Docker is up and running!
	I0717 21:41:15.607591   23321 main.go:141] libmachine: Reticulating splines...
	I0717 21:41:15.607601   23321 client.go:171] LocalClient.Create took 26.292407444s
	I0717 21:41:15.607623   23321 start.go:167] duration metric: libmachine.API.Create for "addons-436248" took 26.292461382s
	I0717 21:41:15.607634   23321 start.go:300] post-start starting for "addons-436248" (driver="kvm2")
	I0717 21:41:15.607643   23321 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 21:41:15.607666   23321 main.go:141] libmachine: (addons-436248) Calling .DriverName
	I0717 21:41:15.607870   23321 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 21:41:15.607897   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHHostname
	I0717 21:41:15.609825   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:15.610154   23321 main.go:141] libmachine: (addons-436248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:fd:1c", ip: ""} in network mk-addons-436248: {Iface:virbr1 ExpiryTime:2023-07-17 22:41:04 +0000 UTC Type:0 Mac:52:54:00:62:fd:1c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-436248 Clientid:01:52:54:00:62:fd:1c}
	I0717 21:41:15.610182   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined IP address 192.168.39.220 and MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:15.610334   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHPort
	I0717 21:41:15.610523   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHKeyPath
	I0717 21:41:15.610662   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHUsername
	I0717 21:41:15.610775   23321 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/addons-436248/id_rsa Username:docker}
	I0717 21:41:15.702551   23321 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 21:41:15.706705   23321 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 21:41:15.706726   23321 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/addons for local assets ...
	I0717 21:41:15.706787   23321 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/files for local assets ...
	I0717 21:41:15.706812   23321 start.go:303] post-start completed in 99.173077ms
	I0717 21:41:15.706841   23321 main.go:141] libmachine: (addons-436248) Calling .GetConfigRaw
	I0717 21:41:15.707340   23321 main.go:141] libmachine: (addons-436248) Calling .GetIP
	I0717 21:41:15.709800   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:15.710135   23321 main.go:141] libmachine: (addons-436248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:fd:1c", ip: ""} in network mk-addons-436248: {Iface:virbr1 ExpiryTime:2023-07-17 22:41:04 +0000 UTC Type:0 Mac:52:54:00:62:fd:1c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-436248 Clientid:01:52:54:00:62:fd:1c}
	I0717 21:41:15.710163   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined IP address 192.168.39.220 and MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:15.710414   23321 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/config.json ...
	I0717 21:41:15.710575   23321 start.go:128] duration metric: createHost completed in 26.412160472s
	I0717 21:41:15.710595   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHHostname
	I0717 21:41:15.712634   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:15.712941   23321 main.go:141] libmachine: (addons-436248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:fd:1c", ip: ""} in network mk-addons-436248: {Iface:virbr1 ExpiryTime:2023-07-17 22:41:04 +0000 UTC Type:0 Mac:52:54:00:62:fd:1c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-436248 Clientid:01:52:54:00:62:fd:1c}
	I0717 21:41:15.712961   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined IP address 192.168.39.220 and MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:15.713102   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHPort
	I0717 21:41:15.713274   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHKeyPath
	I0717 21:41:15.713402   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHKeyPath
	I0717 21:41:15.713544   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHUsername
	I0717 21:41:15.713694   23321 main.go:141] libmachine: Using SSH client type: native
	I0717 21:41:15.714067   23321 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0717 21:41:15.714078   23321 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 21:41:15.846341   23321 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689630075.819615213
	
	I0717 21:41:15.846365   23321 fix.go:206] guest clock: 1689630075.819615213
	I0717 21:41:15.846373   23321 fix.go:219] Guest: 2023-07-17 21:41:15.819615213 +0000 UTC Remote: 2023-07-17 21:41:15.710585508 +0000 UTC m=+26.510438172 (delta=109.029705ms)
	I0717 21:41:15.846391   23321 fix.go:190] guest clock delta is within tolerance: 109.029705ms
	I0717 21:41:15.846396   23321 start.go:83] releasing machines lock for "addons-436248", held for 26.54806704s
	I0717 21:41:15.846415   23321 main.go:141] libmachine: (addons-436248) Calling .DriverName
	I0717 21:41:15.846660   23321 main.go:141] libmachine: (addons-436248) Calling .GetIP
	I0717 21:41:15.849140   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:15.849446   23321 main.go:141] libmachine: (addons-436248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:fd:1c", ip: ""} in network mk-addons-436248: {Iface:virbr1 ExpiryTime:2023-07-17 22:41:04 +0000 UTC Type:0 Mac:52:54:00:62:fd:1c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-436248 Clientid:01:52:54:00:62:fd:1c}
	I0717 21:41:15.849478   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined IP address 192.168.39.220 and MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:15.849621   23321 main.go:141] libmachine: (addons-436248) Calling .DriverName
	I0717 21:41:15.850246   23321 main.go:141] libmachine: (addons-436248) Calling .DriverName
	I0717 21:41:15.850457   23321 main.go:141] libmachine: (addons-436248) Calling .DriverName
	I0717 21:41:15.850554   23321 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 21:41:15.850629   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHHostname
	I0717 21:41:15.850690   23321 ssh_runner.go:195] Run: cat /version.json
	I0717 21:41:15.850720   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHHostname
	I0717 21:41:15.853226   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:15.853254   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:15.853645   23321 main.go:141] libmachine: (addons-436248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:fd:1c", ip: ""} in network mk-addons-436248: {Iface:virbr1 ExpiryTime:2023-07-17 22:41:04 +0000 UTC Type:0 Mac:52:54:00:62:fd:1c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-436248 Clientid:01:52:54:00:62:fd:1c}
	I0717 21:41:15.853682   23321 main.go:141] libmachine: (addons-436248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:fd:1c", ip: ""} in network mk-addons-436248: {Iface:virbr1 ExpiryTime:2023-07-17 22:41:04 +0000 UTC Type:0 Mac:52:54:00:62:fd:1c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-436248 Clientid:01:52:54:00:62:fd:1c}
	I0717 21:41:15.853708   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined IP address 192.168.39.220 and MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:15.853727   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined IP address 192.168.39.220 and MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:15.853896   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHPort
	I0717 21:41:15.853979   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHPort
	I0717 21:41:15.854074   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHKeyPath
	I0717 21:41:15.854141   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHKeyPath
	I0717 21:41:15.854193   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHUsername
	I0717 21:41:15.854258   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHUsername
	I0717 21:41:15.854325   23321 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/addons-436248/id_rsa Username:docker}
	I0717 21:41:15.854378   23321 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/addons-436248/id_rsa Username:docker}
	I0717 21:41:15.950356   23321 ssh_runner.go:195] Run: systemctl --version
	I0717 21:41:16.004383   23321 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 21:41:16.164955   23321 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 21:41:16.171174   23321 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 21:41:16.171264   23321 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 21:41:16.185968   23321 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 21:41:16.185995   23321 start.go:466] detecting cgroup driver to use...
	I0717 21:41:16.186062   23321 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 21:41:16.199567   23321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 21:41:16.211947   23321 docker.go:196] disabling cri-docker service (if available) ...
	I0717 21:41:16.212014   23321 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 21:41:16.224196   23321 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 21:41:16.236467   23321 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 21:41:16.343035   23321 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 21:41:16.468921   23321 docker.go:212] disabling docker service ...
	I0717 21:41:16.469001   23321 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 21:41:16.483060   23321 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 21:41:16.495410   23321 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 21:41:16.613328   23321 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 21:41:16.730198   23321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 21:41:16.743059   23321 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 21:41:16.760138   23321 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 21:41:16.760217   23321 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 21:41:16.770184   23321 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 21:41:16.770264   23321 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 21:41:16.780085   23321 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 21:41:16.790152   23321 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
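The three sed edits above pin the pause image, switch the cgroup manager to cgroupfs, and move conmon into the pod cgroup. A quick way to confirm the intended end state of the drop-in (a verification sketch; the file carries further options not shown in the log) is:

	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected after the edits:
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"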
	I0717 21:41:16.800277   23321 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 21:41:16.810516   23321 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 21:41:16.819106   23321 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 21:41:16.819168   23321 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 21:41:16.832719   23321 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
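The failed sysctl probe just means br_netfilter was not loaded yet; the subsequent modprobe and the echo into ip_forward satisfy the usual kubeadm networking preflight requirements. They can be double-checked with (a verification sketch, not part of the test run):

	lsmod | grep br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
	# kubeadm expects both sysctls to report 1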
	I0717 21:41:16.841830   23321 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 21:41:16.957683   23321 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 21:41:17.124168   23321 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 21:41:17.124251   23321 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 21:41:17.129222   23321 start.go:534] Will wait 60s for crictl version
	I0717 21:41:17.129296   23321 ssh_runner.go:195] Run: which crictl
	I0717 21:41:17.133086   23321 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 21:41:17.166220   23321 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 21:41:17.166331   23321 ssh_runner.go:195] Run: crio --version
	I0717 21:41:17.215767   23321 ssh_runner.go:195] Run: crio --version
	I0717 21:41:17.264563   23321 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 21:41:17.266036   23321 main.go:141] libmachine: (addons-436248) Calling .GetIP
	I0717 21:41:17.268662   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:17.269048   23321 main.go:141] libmachine: (addons-436248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:fd:1c", ip: ""} in network mk-addons-436248: {Iface:virbr1 ExpiryTime:2023-07-17 22:41:04 +0000 UTC Type:0 Mac:52:54:00:62:fd:1c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-436248 Clientid:01:52:54:00:62:fd:1c}
	I0717 21:41:17.269078   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined IP address 192.168.39.220 and MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:17.269240   23321 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 21:41:17.273211   23321 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 21:41:17.285533   23321 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 21:41:17.285602   23321 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 21:41:17.312254   23321 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 21:41:17.312326   23321 ssh_runner.go:195] Run: which lz4
	I0717 21:41:17.316034   23321 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 21:41:17.320020   23321 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 21:41:17.320047   23321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0717 21:41:19.033160   23321 crio.go:444] Took 1.717159 seconds to copy over tarball
	I0717 21:41:19.033256   23321 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 21:41:21.973915   23321 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.940629935s)
	I0717 21:41:21.973938   23321 crio.go:451] Took 2.940754 seconds to extract the tarball
	I0717 21:41:21.973957   23321 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 21:41:22.013198   23321 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 21:41:22.070875   23321 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 21:41:22.070896   23321 cache_images.go:84] Images are preloaded, skipping loading
	I0717 21:41:22.070955   23321 ssh_runner.go:195] Run: crio config
	I0717 21:41:22.130619   23321 cni.go:84] Creating CNI manager for ""
	I0717 21:41:22.130648   23321 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 21:41:22.130660   23321 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 21:41:22.130676   23321 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.220 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-436248 NodeName:addons-436248 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.220"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 21:41:22.130815   23321 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.220
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-436248"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.220
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.220"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 21:41:22.130890   23321 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-436248 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:addons-436248 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 21:41:22.130944   23321 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 21:41:22.139817   23321 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 21:41:22.139889   23321 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 21:41:22.148007   23321 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0717 21:41:22.163430   23321 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 21:41:22.178703   23321 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0717 21:41:22.194044   23321 ssh_runner.go:195] Run: grep 192.168.39.220	control-plane.minikube.internal$ /etc/hosts
	I0717 21:41:22.197589   23321 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.220	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 21:41:22.209606   23321 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248 for IP: 192.168.39.220
	I0717 21:41:22.209642   23321 certs.go:190] acquiring lock for shared ca certs: {Name:mk358cdd8ffcf2f8ada4337c2bb687d932ef5afe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:41:22.209808   23321 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key
	I0717 21:41:22.274776   23321 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt ...
	I0717 21:41:22.274802   23321 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt: {Name:mk723a7808bbedb8e0fd3e5eee1f01222840b151 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:41:22.274950   23321 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key ...
	I0717 21:41:22.274960   23321 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key: {Name:mk44f27b71d2661dbebe2fabf21e65ed35ca0c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:41:22.275038   23321 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key
	I0717 21:41:22.417504   23321 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.crt ...
	I0717 21:41:22.417550   23321 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.crt: {Name:mk0c487bd06bd62988405615daed2c47942d6d8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:41:22.417705   23321 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key ...
	I0717 21:41:22.417715   23321 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key: {Name:mkcaab8f1562f308895da7be8898a2d4986be7e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:41:22.417823   23321 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.key
	I0717 21:41:22.417837   23321 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt with IP's: []
	I0717 21:41:22.550832   23321 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt ...
	I0717 21:41:22.550857   23321 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: {Name:mk26d82dee3ea9f56bcc9f274ffe122135c47e66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:41:22.550993   23321 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.key ...
	I0717 21:41:22.551002   23321 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.key: {Name:mk8e61b75376b96c3f50a85e6a5b60b51396ab4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:41:22.551062   23321 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/apiserver.key.14dd3156
	I0717 21:41:22.551077   23321 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/apiserver.crt.14dd3156 with IP's: [192.168.39.220 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 21:41:22.824922   23321 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/apiserver.crt.14dd3156 ...
	I0717 21:41:22.824951   23321 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/apiserver.crt.14dd3156: {Name:mkf72a89b7663df58d884aacaa15c0fa96fa5cfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:41:22.825094   23321 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/apiserver.key.14dd3156 ...
	I0717 21:41:22.825105   23321 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/apiserver.key.14dd3156: {Name:mk403da37ed6cacfecd0cf57be00e52d6eb1e427 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:41:22.825165   23321 certs.go:337] copying /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/apiserver.crt.14dd3156 -> /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/apiserver.crt
	I0717 21:41:22.825225   23321 certs.go:341] copying /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/apiserver.key.14dd3156 -> /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/apiserver.key
	I0717 21:41:22.825265   23321 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/proxy-client.key
	I0717 21:41:22.825280   23321 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/proxy-client.crt with IP's: []
	I0717 21:41:22.923892   23321 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/proxy-client.crt ...
	I0717 21:41:22.923929   23321 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/proxy-client.crt: {Name:mk49044d29461ad6c0eee7336c66822483ba031b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:41:22.924093   23321 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/proxy-client.key ...
	I0717 21:41:22.924105   23321 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/proxy-client.key: {Name:mk72cbcbb81e1b7aa02c034bc53d22b18bbff785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:41:22.924301   23321 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 21:41:22.924343   23321 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem (1078 bytes)
	I0717 21:41:22.924367   23321 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem (1123 bytes)
	I0717 21:41:22.924397   23321 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem (1675 bytes)
	I0717 21:41:22.925015   23321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 21:41:22.949997   23321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 21:41:22.974210   23321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 21:41:22.998485   23321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 21:41:23.022461   23321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 21:41:23.046028   23321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 21:41:23.068939   23321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 21:41:23.092604   23321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 21:41:23.116641   23321 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 21:41:23.140371   23321 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 21:41:23.156788   23321 ssh_runner.go:195] Run: openssl version
	I0717 21:41:23.162535   23321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 21:41:23.172239   23321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 21:41:23.176879   23321 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0717 21:41:23.176931   23321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 21:41:23.182862   23321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
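The symlink name b5213941.0 comes from the subject hash that the `openssl x509 -hash` call above prints for the minikube CA; OpenSSL looks up CAs in /etc/ssl/certs by that hash, which is why the link is created under that name. The mapping can be reproduced with:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink checked above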
	I0717 21:41:23.192518   23321 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 21:41:23.196710   23321 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 21:41:23.196758   23321 kubeadm.go:404] StartCluster: {Name:addons-436248 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-436248 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:41:23.196842   23321 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 21:41:23.196885   23321 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 21:41:23.225968   23321 cri.go:89] found id: ""
	I0717 21:41:23.226033   23321 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 21:41:23.235106   23321 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 21:41:23.244590   23321 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 21:41:23.253489   23321 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 21:41:23.253554   23321 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 21:41:23.434940   23321 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 21:41:35.270750   23321 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0717 21:41:35.270814   23321 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 21:41:35.270917   23321 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 21:41:35.271058   23321 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 21:41:35.271200   23321 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 21:41:35.271289   23321 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 21:41:35.273011   23321 out.go:204]   - Generating certificates and keys ...
	I0717 21:41:35.273102   23321 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 21:41:35.273193   23321 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 21:41:35.273301   23321 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 21:41:35.273374   23321 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0717 21:41:35.273461   23321 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0717 21:41:35.273544   23321 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0717 21:41:35.273617   23321 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0717 21:41:35.273765   23321 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-436248 localhost] and IPs [192.168.39.220 127.0.0.1 ::1]
	I0717 21:41:35.273823   23321 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0717 21:41:35.273988   23321 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-436248 localhost] and IPs [192.168.39.220 127.0.0.1 ::1]
	I0717 21:41:35.274098   23321 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 21:41:35.274190   23321 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 21:41:35.274251   23321 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0717 21:41:35.274343   23321 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 21:41:35.274435   23321 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 21:41:35.274512   23321 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 21:41:35.274606   23321 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 21:41:35.274691   23321 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 21:41:35.274843   23321 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 21:41:35.274963   23321 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 21:41:35.275002   23321 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 21:41:35.275076   23321 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 21:41:35.276616   23321 out.go:204]   - Booting up control plane ...
	I0717 21:41:35.276739   23321 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 21:41:35.276852   23321 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 21:41:35.276920   23321 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 21:41:35.277017   23321 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 21:41:35.277173   23321 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 21:41:35.277249   23321 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.005921 seconds
	I0717 21:41:35.277360   23321 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 21:41:35.277527   23321 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 21:41:35.277622   23321 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 21:41:35.277886   23321 kubeadm.go:322] [mark-control-plane] Marking the node addons-436248 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 21:41:35.277978   23321 kubeadm.go:322] [bootstrap-token] Using token: 9bkcy0.ksclf2mv09uj2sqq
	I0717 21:41:35.279392   23321 out.go:204]   - Configuring RBAC rules ...
	I0717 21:41:35.279511   23321 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 21:41:35.279609   23321 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 21:41:35.279865   23321 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 21:41:35.279972   23321 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 21:41:35.280139   23321 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 21:41:35.280252   23321 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 21:41:35.280376   23321 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 21:41:35.280417   23321 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 21:41:35.280453   23321 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 21:41:35.280458   23321 kubeadm.go:322] 
	I0717 21:41:35.280538   23321 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 21:41:35.280552   23321 kubeadm.go:322] 
	I0717 21:41:35.280659   23321 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 21:41:35.280670   23321 kubeadm.go:322] 
	I0717 21:41:35.280702   23321 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 21:41:35.280762   23321 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 21:41:35.280808   23321 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 21:41:35.280814   23321 kubeadm.go:322] 
	I0717 21:41:35.280854   23321 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0717 21:41:35.280860   23321 kubeadm.go:322] 
	I0717 21:41:35.280916   23321 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 21:41:35.280922   23321 kubeadm.go:322] 
	I0717 21:41:35.280979   23321 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 21:41:35.281079   23321 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 21:41:35.281177   23321 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 21:41:35.281188   23321 kubeadm.go:322] 
	I0717 21:41:35.281250   23321 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 21:41:35.281311   23321 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 21:41:35.281317   23321 kubeadm.go:322] 
	I0717 21:41:35.281423   23321 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 9bkcy0.ksclf2mv09uj2sqq \
	I0717 21:41:35.281592   23321 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb \
	I0717 21:41:35.281628   23321 kubeadm.go:322] 	--control-plane 
	I0717 21:41:35.281637   23321 kubeadm.go:322] 
	I0717 21:41:35.281736   23321 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 21:41:35.281752   23321 kubeadm.go:322] 
	I0717 21:41:35.281874   23321 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 9bkcy0.ksclf2mv09uj2sqq \
	I0717 21:41:35.282020   23321 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb 
	I0717 21:41:35.282034   23321 cni.go:84] Creating CNI manager for ""
	I0717 21:41:35.282045   23321 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 21:41:35.283628   23321 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 21:41:35.284892   23321 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 21:41:35.351684   23321 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
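The 457-byte /etc/cni/net.d/1-k8s.conflist written above is not echoed in the log. As a rough illustration only (field values are assumptions, not copied from minikube's template), a bridge conflist for the 10.244.0.0/16 pod CIDR generally looks like:

	cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF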
	I0717 21:41:35.422630   23321 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 21:41:35.422699   23321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:41:35.422793   23321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.31.0 minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8 minikube.k8s.io/name=addons-436248 minikube.k8s.io/updated_at=2023_07_17T21_41_35_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:41:35.610629   23321 ops.go:34] apiserver oom_adj: -16
	I0717 21:41:35.627320   23321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:41:36.259943   23321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:41:36.760083   23321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:41:37.260256   23321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:41:37.760302   23321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:41:38.260305   23321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:41:38.760189   23321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:41:39.260211   23321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:41:39.759329   23321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:41:40.260068   23321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:41:40.759369   23321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:41:41.259250   23321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:41:41.759466   23321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:41:42.259497   23321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:41:42.759278   23321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:41:43.260140   23321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:41:43.760313   23321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:41:44.259628   23321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:41:44.759648   23321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:41:45.260224   23321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:41:45.759906   23321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:41:46.259928   23321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:41:46.759474   23321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:41:46.850766   23321 kubeadm.go:1081] duration metric: took 11.428119997s to wait for elevateKubeSystemPrivileges.
	I0717 21:41:46.850794   23321 kubeadm.go:406] StartCluster complete in 23.654036769s
	I0717 21:41:46.850814   23321 settings.go:142] acquiring lock: {Name:mk9be0d05c943f5010cb1ca9690e7c6a87f18950 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:41:46.850962   23321 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 21:41:46.851375   23321 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/kubeconfig: {Name:mk1295c692902c2f497638c9fc1fd126fdb1a2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:41:46.851547   23321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 21:41:46.851584   23321 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0717 21:41:46.851719   23321 addons.go:69] Setting volumesnapshots=true in profile "addons-436248"
	I0717 21:41:46.851731   23321 addons.go:69] Setting cloud-spanner=true in profile "addons-436248"
	I0717 21:41:46.851745   23321 addons.go:231] Setting addon volumesnapshots=true in "addons-436248"
	I0717 21:41:46.851750   23321 addons.go:231] Setting addon cloud-spanner=true in "addons-436248"
	I0717 21:41:46.851759   23321 config.go:182] Loaded profile config "addons-436248": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 21:41:46.851753   23321 addons.go:69] Setting default-storageclass=true in profile "addons-436248"
	I0717 21:41:46.851766   23321 addons.go:69] Setting metrics-server=true in profile "addons-436248"
	I0717 21:41:46.851785   23321 addons.go:231] Setting addon metrics-server=true in "addons-436248"
	I0717 21:41:46.851780   23321 addons.go:69] Setting ingress-dns=true in profile "addons-436248"
	I0717 21:41:46.851796   23321 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-436248"
	I0717 21:41:46.851798   23321 addons.go:69] Setting gcp-auth=true in profile "addons-436248"
	I0717 21:41:46.851802   23321 addons.go:69] Setting registry=true in profile "addons-436248"
	I0717 21:41:46.851722   23321 addons.go:69] Setting ingress=true in profile "addons-436248"
	I0717 21:41:46.851808   23321 addons.go:231] Setting addon ingress-dns=true in "addons-436248"
	I0717 21:41:46.851813   23321 addons.go:231] Setting addon registry=true in "addons-436248"
	I0717 21:41:46.851815   23321 addons.go:231] Setting addon ingress=true in "addons-436248"
	I0717 21:41:46.851817   23321 host.go:66] Checking if "addons-436248" exists ...
	I0717 21:41:46.851828   23321 addons.go:69] Setting inspektor-gadget=true in profile "addons-436248"
	I0717 21:41:46.851829   23321 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-436248"
	I0717 21:41:46.851837   23321 addons.go:231] Setting addon inspektor-gadget=true in "addons-436248"
	I0717 21:41:46.851860   23321 host.go:66] Checking if "addons-436248" exists ...
	I0717 21:41:46.851803   23321 host.go:66] Checking if "addons-436248" exists ...
	I0717 21:41:46.851864   23321 addons.go:69] Setting storage-provisioner=true in profile "addons-436248"
	I0717 21:41:46.851885   23321 host.go:66] Checking if "addons-436248" exists ...
	I0717 21:41:46.851814   23321 mustload.go:65] Loading cluster: addons-436248
	I0717 21:41:46.851792   23321 host.go:66] Checking if "addons-436248" exists ...
	I0717 21:41:46.852122   23321 config.go:182] Loaded profile config "addons-436248": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 21:41:46.851892   23321 addons.go:231] Setting addon storage-provisioner=true in "addons-436248"
	I0717 21:41:46.852309   23321 host.go:66] Checking if "addons-436248" exists ...
	I0717 21:41:46.852326   23321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:41:46.851788   23321 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-436248"
	I0717 21:41:46.852352   23321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:41:46.852384   23321 host.go:66] Checking if "addons-436248" exists ...
	I0717 21:41:46.852498   23321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:41:46.852887   23321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:41:46.851862   23321 host.go:66] Checking if "addons-436248" exists ...
	I0717 21:41:46.852311   23321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:41:46.853086   23321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:41:46.852702   23321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:41:46.853165   23321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:41:46.852311   23321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:41:46.851818   23321 addons.go:69] Setting helm-tiller=true in profile "addons-436248"
	I0717 21:41:46.853220   23321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:41:46.853227   23321 addons.go:231] Setting addon helm-tiller=true in "addons-436248"
	I0717 21:41:46.853267   23321 host.go:66] Checking if "addons-436248" exists ...
	I0717 21:41:46.853309   23321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:41:46.851868   23321 host.go:66] Checking if "addons-436248" exists ...
	I0717 21:41:46.853329   23321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:41:46.852724   23321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:41:46.853372   23321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:41:46.852752   23321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:41:46.853450   23321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:41:46.852802   23321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:41:46.853494   23321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:41:46.852844   23321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:41:46.853556   23321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:41:46.853645   23321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:41:46.853681   23321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:41:46.853712   23321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:41:46.853747   23321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:41:46.873372   23321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43297
	I0717 21:41:46.873405   23321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33677
	I0717 21:41:46.873379   23321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35399
	I0717 21:41:46.873552   23321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43789
	I0717 21:41:46.876840   23321 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:41:46.876889   23321 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:41:46.876840   23321 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:41:46.876967   23321 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:41:46.877444   23321 main.go:141] libmachine: Using API Version  1
	I0717 21:41:46.877468   23321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:41:46.877485   23321 main.go:141] libmachine: Using API Version  1
	I0717 21:41:46.877509   23321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:41:46.877840   23321 main.go:141] libmachine: Using API Version  1
	I0717 21:41:46.877857   23321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:41:46.877953   23321 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:41:46.878202   23321 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:41:46.878257   23321 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:41:46.878505   23321 main.go:141] libmachine: Using API Version  1
	I0717 21:41:46.878520   23321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:41:46.878780   23321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:41:46.878811   23321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:41:46.878951   23321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:41:46.878962   23321 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:41:46.878970   23321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:41:46.879142   23321 main.go:141] libmachine: (addons-436248) Calling .GetState
	I0717 21:41:46.880120   23321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:41:46.880205   23321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:41:46.880477   23321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33683
	I0717 21:41:46.881254   23321 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:41:46.881776   23321 main.go:141] libmachine: Using API Version  1
	I0717 21:41:46.881792   23321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:41:46.882185   23321 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:41:46.882856   23321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:41:46.882905   23321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:41:46.884800   23321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33571
	I0717 21:41:46.885144   23321 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:41:46.885771   23321 main.go:141] libmachine: Using API Version  1
	I0717 21:41:46.885793   23321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:41:46.886143   23321 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:41:46.886333   23321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42349
	I0717 21:41:46.886635   23321 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:41:46.886670   23321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:41:46.886699   23321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:41:46.887054   23321 main.go:141] libmachine: Using API Version  1
	I0717 21:41:46.887073   23321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:41:46.887411   23321 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:41:46.887932   23321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:41:46.887980   23321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:41:46.892651   23321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37665
	I0717 21:41:46.893770   23321 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:41:46.894219   23321 main.go:141] libmachine: Using API Version  1
	I0717 21:41:46.894240   23321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:41:46.894640   23321 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:41:46.894868   23321 main.go:141] libmachine: (addons-436248) Calling .GetState
	I0717 21:41:46.896626   23321 host.go:66] Checking if "addons-436248" exists ...
	I0717 21:41:46.897013   23321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:41:46.897046   23321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:41:46.898015   23321 addons.go:231] Setting addon default-storageclass=true in "addons-436248"
	I0717 21:41:46.898061   23321 host.go:66] Checking if "addons-436248" exists ...
	I0717 21:41:46.898425   23321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:41:46.898455   23321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:41:46.904177   23321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42285
	I0717 21:41:46.904578   23321 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:41:46.905087   23321 main.go:141] libmachine: Using API Version  1
	I0717 21:41:46.905110   23321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:41:46.905451   23321 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:41:46.906084   23321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:41:46.906127   23321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:41:46.907913   23321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37911
	I0717 21:41:46.908243   23321 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:41:46.908386   23321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33769
	I0717 21:41:46.908700   23321 main.go:141] libmachine: Using API Version  1
	I0717 21:41:46.908721   23321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:41:46.908779   23321 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:41:46.909227   23321 main.go:141] libmachine: Using API Version  1
	I0717 21:41:46.909247   23321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:41:46.909310   23321 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:41:46.909666   23321 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:41:46.910052   23321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:41:46.910080   23321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:41:46.910510   23321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:41:46.910537   23321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:41:46.915617   23321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36133
	I0717 21:41:46.916008   23321 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:41:46.916134   23321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38647
	I0717 21:41:46.916506   23321 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:41:46.916686   23321 main.go:141] libmachine: Using API Version  1
	I0717 21:41:46.916704   23321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:41:46.917391   23321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46383
	I0717 21:41:46.917653   23321 main.go:141] libmachine: Using API Version  1
	I0717 21:41:46.917670   23321 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:41:46.917704   23321 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:41:46.917673   23321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:41:46.917900   23321 main.go:141] libmachine: (addons-436248) Calling .GetState
	I0717 21:41:46.918557   23321 main.go:141] libmachine: Using API Version  1
	I0717 21:41:46.918575   23321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:41:46.918607   23321 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:41:46.919046   23321 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:41:46.919162   23321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:41:46.919203   23321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:41:46.919268   23321 main.go:141] libmachine: (addons-436248) Calling .GetState
	I0717 21:41:46.920489   23321 main.go:141] libmachine: (addons-436248) Calling .DriverName
	I0717 21:41:46.922194   23321 out.go:177]   - Using image docker.io/registry:2.8.1
	I0717 21:41:46.921002   23321 main.go:141] libmachine: (addons-436248) Calling .DriverName
	I0717 21:41:46.922836   23321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44929
	I0717 21:41:46.924637   23321 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0717 21:41:46.926032   23321 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.7
	I0717 21:41:46.924014   23321 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:41:46.925988   23321 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0717 21:41:46.927180   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0717 21:41:46.927203   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHHostname
	I0717 21:41:46.927231   23321 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0717 21:41:46.927241   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0717 21:41:46.927257   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHHostname
	I0717 21:41:46.929192   23321 main.go:141] libmachine: Using API Version  1
	I0717 21:41:46.929209   23321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:41:46.930525   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:46.931134   23321 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:41:46.931426   23321 main.go:141] libmachine: (addons-436248) Calling .GetState
	I0717 21:41:46.934229   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:46.934780   23321 main.go:141] libmachine: (addons-436248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:fd:1c", ip: ""} in network mk-addons-436248: {Iface:virbr1 ExpiryTime:2023-07-17 22:41:04 +0000 UTC Type:0 Mac:52:54:00:62:fd:1c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-436248 Clientid:01:52:54:00:62:fd:1c}
	I0717 21:41:46.934813   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined IP address 192.168.39.220 and MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:46.935158   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHPort
	I0717 21:41:46.935757   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHKeyPath
	I0717 21:41:46.935946   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHUsername
	I0717 21:41:46.936102   23321 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/addons-436248/id_rsa Username:docker}
	I0717 21:41:46.936856   23321 main.go:141] libmachine: (addons-436248) Calling .DriverName
	I0717 21:41:46.936912   23321 main.go:141] libmachine: (addons-436248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:fd:1c", ip: ""} in network mk-addons-436248: {Iface:virbr1 ExpiryTime:2023-07-17 22:41:04 +0000 UTC Type:0 Mac:52:54:00:62:fd:1c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-436248 Clientid:01:52:54:00:62:fd:1c}
	I0717 21:41:46.936927   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined IP address 192.168.39.220 and MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:46.936955   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHPort
	I0717 21:41:46.938496   23321 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0717 21:41:46.937097   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHKeyPath
	I0717 21:41:46.939612   23321 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0717 21:41:46.939625   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0717 21:41:46.939649   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHHostname
	I0717 21:41:46.939778   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHUsername
	I0717 21:41:46.939944   23321 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/addons-436248/id_rsa Username:docker}
	I0717 21:41:46.940946   23321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38697
	I0717 21:41:46.942400   23321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46649
	I0717 21:41:46.942927   23321 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:41:46.943429   23321 main.go:141] libmachine: Using API Version  1
	I0717 21:41:46.943441   23321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:41:46.943748   23321 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:41:46.943945   23321 main.go:141] libmachine: (addons-436248) Calling .GetState
	I0717 21:41:46.944078   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:46.944470   23321 main.go:141] libmachine: (addons-436248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:fd:1c", ip: ""} in network mk-addons-436248: {Iface:virbr1 ExpiryTime:2023-07-17 22:41:04 +0000 UTC Type:0 Mac:52:54:00:62:fd:1c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-436248 Clientid:01:52:54:00:62:fd:1c}
	I0717 21:41:46.944493   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined IP address 192.168.39.220 and MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:46.944745   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHPort
	I0717 21:41:46.944912   23321 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:41:46.944997   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHKeyPath
	I0717 21:41:46.945144   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHUsername
	I0717 21:41:46.945302   23321 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/addons-436248/id_rsa Username:docker}
	I0717 21:41:46.946048   23321 main.go:141] libmachine: Using API Version  1
	I0717 21:41:46.946070   23321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:41:46.946151   23321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43147
	I0717 21:41:46.946699   23321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34415
	I0717 21:41:46.946709   23321 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:41:46.946935   23321 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:41:46.947243   23321 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:41:46.947284   23321 main.go:141] libmachine: Using API Version  1
	I0717 21:41:46.947302   23321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:41:46.947491   23321 main.go:141] libmachine: (addons-436248) Calling .GetState
	I0717 21:41:46.947676   23321 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:41:46.947735   23321 main.go:141] libmachine: Using API Version  1
	I0717 21:41:46.947747   23321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:41:46.947843   23321 main.go:141] libmachine: (addons-436248) Calling .DriverName
	I0717 21:41:46.949551   23321 main.go:141] libmachine: (addons-436248) Calling .DriverName
	I0717 21:41:46.949573   23321 main.go:141] libmachine: (addons-436248) Calling .DriverName
	I0717 21:41:46.949594   23321 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:41:46.951242   23321 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.3
	I0717 21:41:46.949956   23321 main.go:141] libmachine: (addons-436248) Calling .GetState
	I0717 21:41:46.952880   23321 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0717 21:41:46.952869   23321 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 21:41:46.954320   23321 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0717 21:41:46.954324   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 21:41:46.954348   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHHostname
	I0717 21:41:46.955008   23321 main.go:141] libmachine: (addons-436248) Calling .DriverName
	I0717 21:41:46.955784   23321 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
	I0717 21:41:46.957222   23321 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 21:41:46.957242   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0717 21:41:46.957262   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHHostname
	I0717 21:41:46.955566   23321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33669
	I0717 21:41:46.958596   23321 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.18.1
	I0717 21:41:46.960449   23321 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0717 21:41:46.960465   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0717 21:41:46.960475   23321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46503
	I0717 21:41:46.960482   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHHostname
	I0717 21:41:46.958655   23321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45157
	I0717 21:41:46.958265   23321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33423
	I0717 21:41:46.958695   23321 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:41:46.958824   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:46.960720   23321 main.go:141] libmachine: (addons-436248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:fd:1c", ip: ""} in network mk-addons-436248: {Iface:virbr1 ExpiryTime:2023-07-17 22:41:04 +0000 UTC Type:0 Mac:52:54:00:62:fd:1c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-436248 Clientid:01:52:54:00:62:fd:1c}
	I0717 21:41:46.960741   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined IP address 192.168.39.220 and MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:46.959494   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHPort
	I0717 21:41:46.961099   23321 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:41:46.961103   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHKeyPath
	I0717 21:41:46.961169   23321 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:41:46.961185   23321 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:41:46.961260   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:46.961399   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHUsername
	I0717 21:41:46.961554   23321 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/addons-436248/id_rsa Username:docker}
	I0717 21:41:46.961708   23321 main.go:141] libmachine: Using API Version  1
	I0717 21:41:46.961763   23321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:41:46.961910   23321 main.go:141] libmachine: Using API Version  1
	I0717 21:41:46.961924   23321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:41:46.961983   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHPort
	I0717 21:41:46.962028   23321 main.go:141] libmachine: (addons-436248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:fd:1c", ip: ""} in network mk-addons-436248: {Iface:virbr1 ExpiryTime:2023-07-17 22:41:04 +0000 UTC Type:0 Mac:52:54:00:62:fd:1c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-436248 Clientid:01:52:54:00:62:fd:1c}
	I0717 21:41:46.962044   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined IP address 192.168.39.220 and MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:46.962072   23321 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:41:46.962304   23321 main.go:141] libmachine: (addons-436248) Calling .GetState
	I0717 21:41:46.962327   23321 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:41:46.962306   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHKeyPath
	I0717 21:41:46.962809   23321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:41:46.962839   23321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:41:46.963086   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHUsername
	I0717 21:41:46.963350   23321 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/addons-436248/id_rsa Username:docker}
	I0717 21:41:46.964538   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:46.964572   23321 main.go:141] libmachine: (addons-436248) Calling .DriverName
	I0717 21:41:46.964638   23321 main.go:141] libmachine: Using API Version  1
	I0717 21:41:46.964639   23321 main.go:141] libmachine: Using API Version  1
	I0717 21:41:46.964650   23321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:41:46.964651   23321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:41:46.964682   23321 main.go:141] libmachine: (addons-436248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:fd:1c", ip: ""} in network mk-addons-436248: {Iface:virbr1 ExpiryTime:2023-07-17 22:41:04 +0000 UTC Type:0 Mac:52:54:00:62:fd:1c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-436248 Clientid:01:52:54:00:62:fd:1c}
	I0717 21:41:46.964700   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined IP address 192.168.39.220 and MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:46.964924   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHPort
	I0717 21:41:46.966302   23321 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0717 21:41:46.965025   23321 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:41:46.965047   23321 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:41:46.965049   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHKeyPath
	I0717 21:41:46.967528   23321 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 21:41:46.967546   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0717 21:41:46.967563   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHHostname
	I0717 21:41:46.967647   23321 main.go:141] libmachine: (addons-436248) Calling .GetState
	I0717 21:41:46.967701   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHUsername
	I0717 21:41:46.967727   23321 main.go:141] libmachine: (addons-436248) Calling .GetState
	I0717 21:41:46.968033   23321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34287
	I0717 21:41:46.968190   23321 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/addons-436248/id_rsa Username:docker}
	I0717 21:41:46.968537   23321 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:41:46.969341   23321 main.go:141] libmachine: Using API Version  1
	I0717 21:41:46.969364   23321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:41:46.969675   23321 main.go:141] libmachine: (addons-436248) Calling .DriverName
	I0717 21:41:46.969779   23321 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:41:46.971131   23321 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0717 21:41:46.970007   23321 main.go:141] libmachine: (addons-436248) Calling .GetState
	I0717 21:41:46.970264   23321 main.go:141] libmachine: (addons-436248) Calling .DriverName
	I0717 21:41:46.972344   23321 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0717 21:41:46.972358   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0717 21:41:46.972372   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHHostname
	I0717 21:41:46.973507   23321 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 21:41:46.971472   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:46.972217   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHPort
	I0717 21:41:46.974079   23321 main.go:141] libmachine: (addons-436248) Calling .DriverName
	I0717 21:41:46.974553   23321 main.go:141] libmachine: (addons-436248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:fd:1c", ip: ""} in network mk-addons-436248: {Iface:virbr1 ExpiryTime:2023-07-17 22:41:04 +0000 UTC Type:0 Mac:52:54:00:62:fd:1c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-436248 Clientid:01:52:54:00:62:fd:1c}
	I0717 21:41:46.974576   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined IP address 192.168.39.220 and MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:46.974590   23321 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 21:41:46.974604   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 21:41:46.974619   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHHostname
	I0717 21:41:46.974685   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHKeyPath
	I0717 21:41:46.975800   23321 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0717 21:41:46.974809   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHUsername
	I0717 21:41:46.976369   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:46.977929   23321 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0717 21:41:46.976881   23321 main.go:141] libmachine: (addons-436248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:fd:1c", ip: ""} in network mk-addons-436248: {Iface:virbr1 ExpiryTime:2023-07-17 22:41:04 +0000 UTC Type:0 Mac:52:54:00:62:fd:1c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-436248 Clientid:01:52:54:00:62:fd:1c}
	I0717 21:41:46.979040   23321 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0717 21:41:46.977060   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHPort
	I0717 21:41:46.977828   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:46.977960   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined IP address 192.168.39.220 and MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:46.977034   23321 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/addons-436248/id_rsa Username:docker}
	I0717 21:41:46.978422   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHPort
	I0717 21:41:46.980285   23321 main.go:141] libmachine: (addons-436248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:fd:1c", ip: ""} in network mk-addons-436248: {Iface:virbr1 ExpiryTime:2023-07-17 22:41:04 +0000 UTC Type:0 Mac:52:54:00:62:fd:1c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-436248 Clientid:01:52:54:00:62:fd:1c}
	I0717 21:41:46.981606   23321 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0717 21:41:46.980313   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined IP address 192.168.39.220 and MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:46.980465   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHKeyPath
	I0717 21:41:46.980476   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHKeyPath
	I0717 21:41:46.982828   23321 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0717 21:41:46.984082   23321 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0717 21:41:46.983014   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHUsername
	I0717 21:41:46.983043   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHUsername
	I0717 21:41:46.984293   23321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36281
	I0717 21:41:46.986433   23321 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0717 21:41:46.985558   23321 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/addons-436248/id_rsa Username:docker}
	I0717 21:41:46.985611   23321 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/addons-436248/id_rsa Username:docker}
	I0717 21:41:46.985806   23321 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:41:46.987627   23321 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0717 21:41:46.989206   23321 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0717 21:41:46.989221   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0717 21:41:46.989239   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHHostname
	I0717 21:41:46.988053   23321 main.go:141] libmachine: Using API Version  1
	I0717 21:41:46.989264   23321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:41:46.989883   23321 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:41:46.990209   23321 main.go:141] libmachine: (addons-436248) Calling .GetState
	I0717 21:41:46.992171   23321 main.go:141] libmachine: (addons-436248) Calling .DriverName
	I0717 21:41:46.992404   23321 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 21:41:46.992419   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 21:41:46.992432   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHHostname
	I0717 21:41:46.992820   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:46.993251   23321 main.go:141] libmachine: (addons-436248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:fd:1c", ip: ""} in network mk-addons-436248: {Iface:virbr1 ExpiryTime:2023-07-17 22:41:04 +0000 UTC Type:0 Mac:52:54:00:62:fd:1c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-436248 Clientid:01:52:54:00:62:fd:1c}
	I0717 21:41:46.993275   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined IP address 192.168.39.220 and MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:46.993595   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHPort
	I0717 21:41:46.993748   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHKeyPath
	I0717 21:41:46.993865   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHUsername
	I0717 21:41:46.993971   23321 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/addons-436248/id_rsa Username:docker}
	I0717 21:41:46.995219   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:46.995516   23321 main.go:141] libmachine: (addons-436248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:fd:1c", ip: ""} in network mk-addons-436248: {Iface:virbr1 ExpiryTime:2023-07-17 22:41:04 +0000 UTC Type:0 Mac:52:54:00:62:fd:1c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-436248 Clientid:01:52:54:00:62:fd:1c}
	I0717 21:41:46.995529   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined IP address 192.168.39.220 and MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:46.995715   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHPort
	I0717 21:41:46.995891   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHKeyPath
	I0717 21:41:46.995988   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHUsername
	I0717 21:41:46.996090   23321 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/addons-436248/id_rsa Username:docker}
	W0717 21:41:46.997117   23321 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:46524->192.168.39.220:22: read: connection reset by peer
	I0717 21:41:46.997152   23321 retry.go:31] will retry after 152.828843ms: ssh: handshake failed: read tcp 192.168.39.1:46524->192.168.39.220:22: read: connection reset by peer
	I0717 21:41:47.175865   23321 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0717 21:41:47.175897   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0717 21:41:47.205540   23321 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 21:41:47.205562   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0717 21:41:47.245013   23321 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0717 21:41:47.245041   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0717 21:41:47.249036   23321 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0717 21:41:47.249053   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0717 21:41:47.262549   23321 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0717 21:41:47.273026   23321 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 21:41:47.278802   23321 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0717 21:41:47.278824   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0717 21:41:47.310784   23321 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 21:41:47.310808   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 21:41:47.315998   23321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 21:41:47.331065   23321 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 21:41:47.362804   23321 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0717 21:41:47.362827   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0717 21:41:47.363935   23321 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0717 21:41:47.363949   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0717 21:41:47.367581   23321 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0717 21:41:47.367596   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0717 21:41:47.369338   23321 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 21:41:47.386927   23321 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0717 21:41:47.386952   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0717 21:41:47.387979   23321 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0717 21:41:47.388001   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0717 21:41:47.448409   23321 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-436248" context rescaled to 1 replicas
	I0717 21:41:47.448446   23321 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 21:41:47.451254   23321 out.go:177] * Verifying Kubernetes components...
	I0717 21:41:47.452593   23321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 21:41:47.502876   23321 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 21:41:47.502893   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 21:41:47.567985   23321 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0717 21:41:47.568009   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0717 21:41:47.586144   23321 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 21:41:47.595815   23321 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0717 21:41:47.596309   23321 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0717 21:41:47.609904   23321 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0717 21:41:47.609920   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0717 21:41:47.661801   23321 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0717 21:41:47.661826   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0717 21:41:47.676858   23321 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 21:41:47.739397   23321 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0717 21:41:47.739417   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0717 21:41:47.778775   23321 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0717 21:41:47.778800   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0717 21:41:47.797962   23321 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0717 21:41:47.797985   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0717 21:41:47.866239   23321 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0717 21:41:47.866262   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0717 21:41:47.919876   23321 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 21:41:47.919900   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0717 21:41:47.938129   23321 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0717 21:41:47.938154   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0717 21:41:47.980575   23321 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0717 21:41:47.980596   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0717 21:41:48.018888   23321 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 21:41:48.037142   23321 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0717 21:41:48.037168   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0717 21:41:48.084439   23321 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 21:41:48.084459   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0717 21:41:48.100132   23321 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0717 21:41:48.100160   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0717 21:41:48.158653   23321 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 21:41:48.189188   23321 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0717 21:41:48.189207   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0717 21:41:48.268757   23321 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0717 21:41:48.268783   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0717 21:41:48.314547   23321 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0717 21:41:48.314565   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0717 21:41:48.355805   23321 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 21:41:48.355826   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0717 21:41:48.398207   23321 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 21:41:52.819910   23321 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.557320223s)
	I0717 21:41:52.819954   23321 main.go:141] libmachine: Making call to close driver server
	I0717 21:41:52.819964   23321 main.go:141] libmachine: (addons-436248) Calling .Close
	I0717 21:41:52.820259   23321 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:41:52.820310   23321 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:41:52.820335   23321 main.go:141] libmachine: Making call to close driver server
	I0717 21:41:52.820349   23321 main.go:141] libmachine: (addons-436248) Calling .Close
	I0717 21:41:52.820557   23321 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:41:52.820575   23321 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:41:53.679998   23321 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0717 21:41:53.680036   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHHostname
	I0717 21:41:53.683535   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:53.683927   23321 main.go:141] libmachine: (addons-436248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:fd:1c", ip: ""} in network mk-addons-436248: {Iface:virbr1 ExpiryTime:2023-07-17 22:41:04 +0000 UTC Type:0 Mac:52:54:00:62:fd:1c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-436248 Clientid:01:52:54:00:62:fd:1c}
	I0717 21:41:53.683960   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined IP address 192.168.39.220 and MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:53.684181   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHPort
	I0717 21:41:53.684408   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHKeyPath
	I0717 21:41:53.684577   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHUsername
	I0717 21:41:53.684748   23321 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/addons-436248/id_rsa Username:docker}
	I0717 21:41:53.858154   23321 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0717 21:41:53.892320   23321 addons.go:231] Setting addon gcp-auth=true in "addons-436248"
	I0717 21:41:53.892366   23321 host.go:66] Checking if "addons-436248" exists ...
	I0717 21:41:53.892659   23321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:41:53.892702   23321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:41:53.907486   23321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35113
	I0717 21:41:53.907920   23321 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:41:53.908438   23321 main.go:141] libmachine: Using API Version  1
	I0717 21:41:53.908465   23321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:41:53.908798   23321 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:41:53.909230   23321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:41:53.909278   23321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:41:53.924546   23321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44333
	I0717 21:41:53.924943   23321 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:41:53.925348   23321 main.go:141] libmachine: Using API Version  1
	I0717 21:41:53.925372   23321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:41:53.925776   23321 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:41:53.925958   23321 main.go:141] libmachine: (addons-436248) Calling .GetState
	I0717 21:41:53.927826   23321 main.go:141] libmachine: (addons-436248) Calling .DriverName
	I0717 21:41:53.928064   23321 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0717 21:41:53.928165   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHHostname
	I0717 21:41:53.930986   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:53.931431   23321 main.go:141] libmachine: (addons-436248) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:fd:1c", ip: ""} in network mk-addons-436248: {Iface:virbr1 ExpiryTime:2023-07-17 22:41:04 +0000 UTC Type:0 Mac:52:54:00:62:fd:1c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-436248 Clientid:01:52:54:00:62:fd:1c}
	I0717 21:41:53.931468   23321 main.go:141] libmachine: (addons-436248) DBG | domain addons-436248 has defined IP address 192.168.39.220 and MAC address 52:54:00:62:fd:1c in network mk-addons-436248
	I0717 21:41:53.931593   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHPort
	I0717 21:41:53.931783   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHKeyPath
	I0717 21:41:53.931981   23321 main.go:141] libmachine: (addons-436248) Calling .GetSSHUsername
	I0717 21:41:53.932143   23321 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/addons-436248/id_rsa Username:docker}
	I0717 21:41:56.302664   23321 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.029602943s)
	I0717 21:41:56.302698   23321 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.986660593s)
	I0717 21:41:56.302726   23321 main.go:141] libmachine: Making call to close driver server
	I0717 21:41:56.302728   23321 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0717 21:41:56.302741   23321 main.go:141] libmachine: (addons-436248) Calling .Close
	I0717 21:41:56.302756   23321 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.971656689s)
	I0717 21:41:56.302793   23321 main.go:141] libmachine: Making call to close driver server
	I0717 21:41:56.302808   23321 main.go:141] libmachine: (addons-436248) Calling .Close
	I0717 21:41:56.302856   23321 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.933492509s)
	I0717 21:41:56.302885   23321 main.go:141] libmachine: Making call to close driver server
	I0717 21:41:56.302896   23321 main.go:141] libmachine: (addons-436248) Calling .Close
	I0717 21:41:56.302895   23321 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (8.850275572s)
	I0717 21:41:56.302968   23321 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.716798182s)
	I0717 21:41:56.303025   23321 main.go:141] libmachine: Making call to close driver server
	I0717 21:41:56.303339   23321 main.go:141] libmachine: (addons-436248) Calling .Close
	I0717 21:41:56.303372   23321 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.144687268s)
	I0717 21:41:56.303391   23321 main.go:141] libmachine: Making call to close driver server
	I0717 21:41:56.303407   23321 main.go:141] libmachine: (addons-436248) Calling .Close
	I0717 21:41:56.303032   23321 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.707184644s)
	I0717 21:41:56.303441   23321 main.go:141] libmachine: Making call to close driver server
	I0717 21:41:56.303451   23321 main.go:141] libmachine: (addons-436248) Calling .Close
	I0717 21:41:56.303092   23321 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.7067481s)
	I0717 21:41:56.303508   23321 main.go:141] libmachine: Making call to close driver server
	I0717 21:41:56.303516   23321 main.go:141] libmachine: (addons-436248) Calling .Close
	I0717 21:41:56.303177   23321 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.626287276s)
	I0717 21:41:56.303552   23321 main.go:141] libmachine: Making call to close driver server
	I0717 21:41:56.303562   23321 main.go:141] libmachine: (addons-436248) Calling .Close
	I0717 21:41:56.303233   23321 main.go:141] libmachine: (addons-436248) DBG | Closing plugin on server side
	I0717 21:41:56.303254   23321 main.go:141] libmachine: (addons-436248) DBG | Closing plugin on server side
	I0717 21:41:56.303257   23321 main.go:141] libmachine: (addons-436248) DBG | Closing plugin on server side
	I0717 21:41:56.303267   23321 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:41:56.303603   23321 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:41:56.303611   23321 main.go:141] libmachine: Making call to close driver server
	I0717 21:41:56.303284   23321 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:41:56.303619   23321 main.go:141] libmachine: (addons-436248) Calling .Close
	I0717 21:41:56.303629   23321 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:41:56.303639   23321 main.go:141] libmachine: Making call to close driver server
	I0717 21:41:56.303648   23321 main.go:141] libmachine: (addons-436248) Calling .Close
	I0717 21:41:56.303298   23321 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:41:56.303682   23321 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:41:56.303695   23321 main.go:141] libmachine: Making call to close driver server
	I0717 21:41:56.303705   23321 main.go:141] libmachine: (addons-436248) Calling .Close
	I0717 21:41:56.303305   23321 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.284385763s)
	W0717 21:41:56.303777   23321 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0717 21:41:56.303809   23321 retry.go:31] will retry after 253.568955ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
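	(Editor's note: the failed apply above is a CRD ordering problem: the VolumeSnapshotClass object is submitted in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, so the API server does not yet know the kind, and minikube schedules a timed retry (retry.go) before re-running the apply, as the log shows next. Below is a minimal sketch of such a retry-around-kubectl pattern; the helper names and parameters are hypothetical and this is not minikube's actual implementation.)

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"time"
	    )

	    // applyWithRetry re-runs `kubectl apply -f <manifest>` until it succeeds or the
	    // attempts are exhausted, sleeping between tries so freshly created CRDs have
	    // time to register with the API server. Sketch only (hypothetical helper).
	    func applyWithRetry(kubeconfig, manifest string, attempts int, backoff time.Duration) error {
	    	var lastErr error
	    	for i := 0; i < attempts; i++ {
	    		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "apply", "-f", manifest)
	    		out, err := cmd.CombinedOutput()
	    		if err == nil {
	    			return nil
	    		}
	    		lastErr = fmt.Errorf("apply %s failed: %v\n%s", manifest, err, out)
	    		time.Sleep(backoff)
	    	}
	    	return lastErr
	    }

	    func main() {
	    	err := applyWithRetry("/var/lib/minikube/kubeconfig",
	    		"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
	    		3, 250*time.Millisecond)
	    	if err != nil {
	    		fmt.Println(err)
	    	}
	    }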
	I0717 21:41:56.306662   23321 main.go:141] libmachine: (addons-436248) DBG | Closing plugin on server side
	I0717 21:41:56.306673   23321 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:41:56.306689   23321 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:41:56.306701   23321 main.go:141] libmachine: Making call to close driver server
	I0717 21:41:56.306707   23321 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:41:56.306711   23321 main.go:141] libmachine: (addons-436248) Calling .Close
	I0717 21:41:56.306715   23321 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:41:56.306724   23321 addons.go:467] Verifying addon ingress=true in "addons-436248"
	I0717 21:41:56.308691   23321 out.go:177] * Verifying ingress addon...
	I0717 21:41:56.306794   23321 main.go:141] libmachine: (addons-436248) DBG | Closing plugin on server side
	I0717 21:41:56.306822   23321 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:41:56.306845   23321 main.go:141] libmachine: (addons-436248) DBG | Closing plugin on server side
	I0717 21:41:56.306867   23321 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:41:56.306900   23321 main.go:141] libmachine: (addons-436248) DBG | Closing plugin on server side
	I0717 21:41:56.306924   23321 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:41:56.306938   23321 main.go:141] libmachine: (addons-436248) DBG | Closing plugin on server side
	I0717 21:41:56.306942   23321 main.go:141] libmachine: (addons-436248) DBG | Closing plugin on server side
	I0717 21:41:56.306968   23321 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:41:56.306986   23321 main.go:141] libmachine: (addons-436248) DBG | Closing plugin on server side
	I0717 21:41:56.307005   23321 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:41:56.307207   23321 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:41:56.307357   23321 node_ready.go:35] waiting up to 6m0s for node "addons-436248" to be "Ready" ...
	I0717 21:41:56.307499   23321 main.go:141] libmachine: (addons-436248) DBG | Closing plugin on server side
	I0717 21:41:56.307514   23321 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:41:56.310233   23321 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:41:56.310238   23321 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:41:56.310246   23321 addons.go:467] Verifying addon registry=true in "addons-436248"
	I0717 21:41:56.310263   23321 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:41:56.310282   23321 main.go:141] libmachine: Making call to close driver server
	I0717 21:41:56.310286   23321 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:41:56.311857   23321 out.go:177] * Verifying registry addon...
	I0717 21:41:56.310301   23321 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:41:56.310304   23321 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:41:56.310291   23321 main.go:141] libmachine: (addons-436248) Calling .Close
	I0717 21:41:56.313396   23321 main.go:141] libmachine: Making call to close driver server
	I0717 21:41:56.313407   23321 main.go:141] libmachine: (addons-436248) Calling .Close
	I0717 21:41:56.310313   23321 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:41:56.313487   23321 main.go:141] libmachine: Making call to close driver server
	I0717 21:41:56.313504   23321 main.go:141] libmachine: (addons-436248) Calling .Close
	I0717 21:41:56.310977   23321 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0717 21:41:56.313663   23321 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:41:56.313676   23321 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:41:56.313725   23321 main.go:141] libmachine: (addons-436248) DBG | Closing plugin on server side
	I0717 21:41:56.311883   23321 main.go:141] libmachine: Making call to close driver server
	I0717 21:41:56.313748   23321 main.go:141] libmachine: (addons-436248) Calling .Close
	I0717 21:41:56.313748   23321 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:41:56.313762   23321 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:41:56.313792   23321 main.go:141] libmachine: Making call to close driver server
	I0717 21:41:56.313801   23321 main.go:141] libmachine: (addons-436248) Calling .Close
	I0717 21:41:56.313923   23321 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:41:56.313937   23321 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:41:56.314039   23321 main.go:141] libmachine: (addons-436248) DBG | Closing plugin on server side
	I0717 21:41:56.314065   23321 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:41:56.314078   23321 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:41:56.314179   23321 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0717 21:41:56.316945   23321 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:41:56.316963   23321 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:41:56.316972   23321 addons.go:467] Verifying addon metrics-server=true in "addons-436248"
	I0717 21:41:56.351517   23321 node_ready.go:49] node "addons-436248" has status "Ready":"True"
	I0717 21:41:56.351545   23321 node_ready.go:38] duration metric: took 41.251092ms waiting for node "addons-436248" to be "Ready" ...
	I0717 21:41:56.351556   23321 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 21:41:56.353379   23321 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0717 21:41:56.353395   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:41:56.353497   23321 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0717 21:41:56.353512   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:41:56.397893   23321 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-hwhfp" in "kube-system" namespace to be "Ready" ...
	I0717 21:41:56.412358   23321 pod_ready.go:97] pod "coredns-5d78c9869d-hwhfp" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-17 21:41:48 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-17 21:41:48 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-17 21:41:48 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-17 21:41:48 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.220 PodIP: PodIPs:[] StartTime:2023-07-17 21:41:48 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-07-17 21:41:55 +0000 UTC,FinishedAt:2023-07-17 21:41:55 +0000 UTC,ContainerID:cri-o://f47b418513648fb3d6f10eb3ace34ab0e042ea520116e94203f1f149ca6379a5,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://f47b418513648fb3d6f10eb3ace34ab0e042ea520116e94203f1f149ca6379a5 Started:0xc0013cfcbc AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0717 21:41:56.412384   23321 pod_ready.go:81] duration metric: took 14.464987ms waiting for pod "coredns-5d78c9869d-hwhfp" in "kube-system" namespace to be "Ready" ...
	E0717 21:41:56.412393   23321 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5d78c9869d-hwhfp" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-17 21:41:48 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-17 21:41:48 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-17 21:41:48 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-17 21:41:48 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.220 PodIP: PodIPs:[] StartTime:2023-07-17 21:41:48 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-07-17 21:41:55 +0000 UTC,FinishedAt:2023-07-17 21:41:55 +0000 UTC,ContainerID:cri-o://f47b418513648fb3d6f10eb3ace34ab0e042ea520116e94203f1f149ca6379a5,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://f47b418513648fb3d6f10eb3ace34ab0e042ea520116e94203f1f149ca6379a5 Started:0xc0013cfcbc AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0717 21:41:56.412400   23321 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-t7knm" in "kube-system" namespace to be "Ready" ...
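	(Editor's note: the pod_ready.go lines above poll system-critical pods and bail out early when a pod is already in a terminal phase, which is why the Failed coredns replica is "skipped" rather than waited on. Below is a minimal client-go sketch of that kind of readiness poll; the function name, timings, and pod name reuse values from this log but are otherwise hypothetical, not minikube's actual code.)

	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"time"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    // waitPodReady polls a pod until its Ready condition is True, returning early
	    // if the pod has already reached a terminal phase (Succeeded or Failed).
	    // Hypothetical sketch only.
	    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	    		if err != nil {
	    			return err
	    		}
	    		if pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed {
	    			return fmt.Errorf("pod %s/%s reached terminal phase %s, skipping", ns, name, pod.Status.Phase)
	    		}
	    		for _, c := range pod.Status.Conditions {
	    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
	    				return nil
	    			}
	    		}
	    		time.Sleep(2 * time.Second)
	    	}
	    	return fmt.Errorf("timed out waiting for pod %s/%s", ns, name)
	    }

	    func main() {
	    	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	    	if err != nil {
	    		panic(err)
	    	}
	    	cs, err := kubernetes.NewForConfig(config)
	    	if err != nil {
	    		panic(err)
	    	}
	    	fmt.Println(waitPodReady(cs, "kube-system", "coredns-5d78c9869d-t7knm", 6*time.Minute))
	    }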
	I0717 21:41:56.558455   23321 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 21:41:56.894992   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:41:56.921702   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:41:57.326752   23321 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.928494713s)
	I0717 21:41:57.326793   23321 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.398707529s)
	I0717 21:41:57.326806   23321 main.go:141] libmachine: Making call to close driver server
	I0717 21:41:57.326818   23321 main.go:141] libmachine: (addons-436248) Calling .Close
	I0717 21:41:57.329049   23321 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0717 21:41:57.327115   23321 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:41:57.329090   23321 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:41:57.329102   23321 main.go:141] libmachine: Making call to close driver server
	I0717 21:41:57.329113   23321 main.go:141] libmachine: (addons-436248) Calling .Close
	I0717 21:41:57.327121   23321 main.go:141] libmachine: (addons-436248) DBG | Closing plugin on server side
	I0717 21:41:57.329376   23321 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:41:57.329401   23321 main.go:141] libmachine: (addons-436248) DBG | Closing plugin on server side
	I0717 21:41:57.330875   23321 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0717 21:41:57.330878   23321 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:41:57.332905   23321 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0717 21:41:57.332919   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0717 21:41:57.330894   23321 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-436248"
	I0717 21:41:57.334855   23321 out.go:177] * Verifying csi-hostpath-driver addon...
	I0717 21:41:57.336839   23321 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0717 21:41:57.366672   23321 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0717 21:41:57.366701   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:41:57.375340   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:41:57.376300   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:41:57.386698   23321 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0717 21:41:57.386724   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0717 21:41:57.802631   23321 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 21:41:57.802656   23321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0717 21:41:57.865696   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:41:57.896252   23321 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 21:41:57.905698   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:41:57.991691   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:41:58.507436   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:41:58.583386   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:41:58.583538   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:41:58.603715   23321 pod_ready.go:102] pod "coredns-5d78c9869d-t7knm" in "kube-system" namespace has status "Ready":"False"
	I0717 21:41:58.864035   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:41:58.867542   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:41:58.879626   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:41:59.367633   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:41:59.367672   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:41:59.377959   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:41:59.773928   23321 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.215424039s)
	I0717 21:41:59.773983   23321 main.go:141] libmachine: Making call to close driver server
	I0717 21:41:59.773997   23321 main.go:141] libmachine: (addons-436248) Calling .Close
	I0717 21:41:59.774280   23321 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:41:59.774339   23321 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:41:59.774353   23321 main.go:141] libmachine: Making call to close driver server
	I0717 21:41:59.774364   23321 main.go:141] libmachine: (addons-436248) Calling .Close
	I0717 21:41:59.774286   23321 main.go:141] libmachine: (addons-436248) DBG | Closing plugin on server side
	I0717 21:41:59.774617   23321 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:41:59.774636   23321 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:41:59.859358   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:41:59.868774   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:41:59.882942   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:00.265687   23321 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.369388886s)
	I0717 21:42:00.265748   23321 main.go:141] libmachine: Making call to close driver server
	I0717 21:42:00.265762   23321 main.go:141] libmachine: (addons-436248) Calling .Close
	I0717 21:42:00.266124   23321 main.go:141] libmachine: (addons-436248) DBG | Closing plugin on server side
	I0717 21:42:00.266192   23321 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:42:00.266208   23321 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:42:00.266223   23321 main.go:141] libmachine: Making call to close driver server
	I0717 21:42:00.266236   23321 main.go:141] libmachine: (addons-436248) Calling .Close
	I0717 21:42:00.266478   23321 main.go:141] libmachine: (addons-436248) DBG | Closing plugin on server side
	I0717 21:42:00.266509   23321 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:42:00.266527   23321 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:42:00.267582   23321 addons.go:467] Verifying addon gcp-auth=true in "addons-436248"
	I0717 21:42:00.269333   23321 out.go:177] * Verifying gcp-auth addon...
	I0717 21:42:00.271789   23321 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0717 21:42:00.313424   23321 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0717 21:42:00.313442   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:00.387070   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:00.387460   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:00.394032   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:00.831882   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:00.861230   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:00.862893   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:00.877226   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:00.973895   23321 pod_ready.go:102] pod "coredns-5d78c9869d-t7knm" in "kube-system" namespace has status "Ready":"False"
	I0717 21:42:01.317618   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:01.359811   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:01.363080   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:01.374971   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:01.818059   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:01.869151   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:01.871525   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:01.875385   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:02.317859   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:02.359286   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:02.359586   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:02.372806   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:02.819513   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:02.861340   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:02.861403   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:02.871712   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:02.974605   23321 pod_ready.go:102] pod "coredns-5d78c9869d-t7knm" in "kube-system" namespace has status "Ready":"False"
	I0717 21:42:03.337436   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:03.375956   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:03.390866   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:03.394894   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:03.817567   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:03.860311   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:03.861446   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:03.875641   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:04.336838   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:04.365424   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:04.365503   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:04.387273   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:04.817846   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:04.868273   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:04.871975   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:04.876353   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:05.317554   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:05.360217   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:05.362012   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:05.372358   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:05.461048   23321 pod_ready.go:102] pod "coredns-5d78c9869d-t7knm" in "kube-system" namespace has status "Ready":"False"
	I0717 21:42:05.817691   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:05.876237   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:05.879954   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:05.884667   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:06.317547   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:06.362601   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:06.362666   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:06.376051   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:06.819023   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:06.860833   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:06.866330   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:06.873230   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:07.318144   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:07.365770   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:07.366266   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:07.379249   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:07.820268   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:07.886797   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:07.897951   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:07.914845   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:08.262557   23321 pod_ready.go:102] pod "coredns-5d78c9869d-t7knm" in "kube-system" namespace has status "Ready":"False"
	I0717 21:42:08.317676   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:08.360278   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:08.361369   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:08.375128   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:08.826490   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:08.859531   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:08.861990   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:08.875511   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:09.317879   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:09.363731   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:09.364063   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:09.373695   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:09.823194   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:09.873478   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:09.879213   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:09.880596   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:10.318334   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:10.364300   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:10.365440   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:10.374978   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:10.461273   23321 pod_ready.go:102] pod "coredns-5d78c9869d-t7knm" in "kube-system" namespace has status "Ready":"False"
	I0717 21:42:10.823850   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:10.860798   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:10.861046   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:10.875695   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:11.317343   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:11.360476   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:11.361211   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:11.373868   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:11.817410   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:11.859953   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:11.864008   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:11.891946   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:12.317220   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:12.360380   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:12.363253   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:12.377015   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:12.828217   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:12.868820   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:12.884954   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:12.891905   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:12.970419   23321 pod_ready.go:102] pod "coredns-5d78c9869d-t7knm" in "kube-system" namespace has status "Ready":"False"
	I0717 21:42:13.323075   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:13.358961   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:13.360048   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:13.372741   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:13.818250   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:13.863051   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:13.866493   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:13.884746   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:14.323389   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:14.360742   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:14.369906   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:14.376567   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:14.818360   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:14.860439   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:14.861051   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:14.873792   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:15.316992   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:15.359998   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:15.361151   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:15.372652   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:15.459195   23321 pod_ready.go:102] pod "coredns-5d78c9869d-t7knm" in "kube-system" namespace has status "Ready":"False"
	I0717 21:42:15.817898   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:15.864465   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:15.864761   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:15.871707   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:16.317425   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:16.360053   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:16.361185   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:16.373030   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:16.817105   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:16.860525   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:16.862652   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:16.874989   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:17.318643   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:17.362173   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:17.364263   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:17.416486   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:17.480513   23321 pod_ready.go:102] pod "coredns-5d78c9869d-t7knm" in "kube-system" namespace has status "Ready":"False"
	I0717 21:42:17.819376   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:17.860752   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:17.861316   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:17.872424   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:18.317526   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:18.359009   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:18.359459   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:18.372471   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:18.820983   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:18.860303   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:18.861082   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:18.871221   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:19.317876   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:19.359448   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:19.359958   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:19.372723   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:19.818179   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:19.858661   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:19.859495   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:19.872441   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:19.960555   23321 pod_ready.go:102] pod "coredns-5d78c9869d-t7knm" in "kube-system" namespace has status "Ready":"False"
	I0717 21:42:20.317681   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:20.359323   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:20.360236   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:20.373492   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:20.818584   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:20.859564   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:20.859782   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:20.872791   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:21.317682   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:21.360582   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:21.361160   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:21.372110   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:21.817283   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:21.861221   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:21.861482   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:21.872411   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:21.964172   23321 pod_ready.go:102] pod "coredns-5d78c9869d-t7knm" in "kube-system" namespace has status "Ready":"False"
	I0717 21:42:22.318790   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:22.406858   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:22.412284   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:22.412440   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:22.817610   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:22.858589   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:22.858899   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:22.872376   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:23.537438   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:23.538272   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:23.538578   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:23.539981   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:23.818015   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:23.858703   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:23.859408   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:23.872297   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:24.317282   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:24.359220   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:24.359317   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:24.372295   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:24.460138   23321 pod_ready.go:102] pod "coredns-5d78c9869d-t7knm" in "kube-system" namespace has status "Ready":"False"
	I0717 21:42:24.818543   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:24.862179   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:24.862447   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:24.873120   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:25.318185   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:25.359324   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:25.360119   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:25.372828   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:25.833633   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:25.859380   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:25.859511   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:25.878923   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:26.317500   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:26.364497   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:26.364525   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:26.375111   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:26.470686   23321 pod_ready.go:102] pod "coredns-5d78c9869d-t7knm" in "kube-system" namespace has status "Ready":"False"
	I0717 21:42:26.817369   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:26.860006   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:26.860062   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:26.874287   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:27.318269   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:27.362609   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:27.362658   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:27.373075   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:27.821147   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:27.861758   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:27.862415   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:27.875799   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:28.323776   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:28.369575   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:28.369925   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:28.378505   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:29.006773   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:29.007090   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:29.009805   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:29.011578   23321 pod_ready.go:102] pod "coredns-5d78c9869d-t7knm" in "kube-system" namespace has status "Ready":"False"
	I0717 21:42:29.011797   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:29.318065   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:29.360501   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:29.361620   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:29.372340   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:29.822652   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:29.859452   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:29.859512   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:29.872460   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:30.317663   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:30.359989   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:30.360659   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:30.373767   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:30.493568   23321 pod_ready.go:92] pod "coredns-5d78c9869d-t7knm" in "kube-system" namespace has status "Ready":"True"
	I0717 21:42:30.493597   23321 pod_ready.go:81] duration metric: took 34.081188505s waiting for pod "coredns-5d78c9869d-t7knm" in "kube-system" namespace to be "Ready" ...
	I0717 21:42:30.493614   23321 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-436248" in "kube-system" namespace to be "Ready" ...
	I0717 21:42:30.514019   23321 pod_ready.go:92] pod "etcd-addons-436248" in "kube-system" namespace has status "Ready":"True"
	I0717 21:42:30.514052   23321 pod_ready.go:81] duration metric: took 20.427045ms waiting for pod "etcd-addons-436248" in "kube-system" namespace to be "Ready" ...
	I0717 21:42:30.514065   23321 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-436248" in "kube-system" namespace to be "Ready" ...
	I0717 21:42:30.528774   23321 pod_ready.go:92] pod "kube-apiserver-addons-436248" in "kube-system" namespace has status "Ready":"True"
	I0717 21:42:30.528799   23321 pod_ready.go:81] duration metric: took 14.726588ms waiting for pod "kube-apiserver-addons-436248" in "kube-system" namespace to be "Ready" ...
	I0717 21:42:30.528815   23321 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-436248" in "kube-system" namespace to be "Ready" ...
	I0717 21:42:30.546919   23321 pod_ready.go:92] pod "kube-controller-manager-addons-436248" in "kube-system" namespace has status "Ready":"True"
	I0717 21:42:30.546941   23321 pod_ready.go:81] duration metric: took 18.119279ms waiting for pod "kube-controller-manager-addons-436248" in "kube-system" namespace to be "Ready" ...
	I0717 21:42:30.546959   23321 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sc8ph" in "kube-system" namespace to be "Ready" ...
	I0717 21:42:30.579910   23321 pod_ready.go:92] pod "kube-proxy-sc8ph" in "kube-system" namespace has status "Ready":"True"
	I0717 21:42:30.579930   23321 pod_ready.go:81] duration metric: took 32.965653ms waiting for pod "kube-proxy-sc8ph" in "kube-system" namespace to be "Ready" ...
	I0717 21:42:30.579939   23321 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-436248" in "kube-system" namespace to be "Ready" ...
	I0717 21:42:30.819991   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:30.857552   23321 pod_ready.go:92] pod "kube-scheduler-addons-436248" in "kube-system" namespace has status "Ready":"True"
	I0717 21:42:30.857572   23321 pod_ready.go:81] duration metric: took 277.626826ms waiting for pod "kube-scheduler-addons-436248" in "kube-system" namespace to be "Ready" ...
	I0717 21:42:30.857581   23321 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-844d8db974-7bttp" in "kube-system" namespace to be "Ready" ...
	I0717 21:42:30.861102   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:30.861264   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:30.874701   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:31.319411   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:31.360584   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:31.360585   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:31.372633   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:31.819973   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:31.859715   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:31.860056   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:31.873418   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:32.320498   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:32.385395   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:32.386388   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:32.386621   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:32.824147   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:32.880905   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:32.882589   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:32.884511   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:33.280339   23321 pod_ready.go:102] pod "metrics-server-844d8db974-7bttp" in "kube-system" namespace has status "Ready":"False"
	I0717 21:42:33.325770   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:33.363104   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:33.364254   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:33.372713   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:33.818248   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:33.859123   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:33.864887   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:33.878275   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:34.339608   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:34.365583   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:34.365892   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:34.373148   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:34.765097   23321 pod_ready.go:92] pod "metrics-server-844d8db974-7bttp" in "kube-system" namespace has status "Ready":"True"
	I0717 21:42:34.765119   23321 pod_ready.go:81] duration metric: took 3.907531789s waiting for pod "metrics-server-844d8db974-7bttp" in "kube-system" namespace to be "Ready" ...
	I0717 21:42:34.765137   23321 pod_ready.go:38] duration metric: took 38.413557713s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 21:42:34.765158   23321 api_server.go:52] waiting for apiserver process to appear ...
	I0717 21:42:34.765200   23321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 21:42:34.803538   23321 api_server.go:72] duration metric: took 47.355056111s to wait for apiserver process to appear ...
	I0717 21:42:34.803565   23321 api_server.go:88] waiting for apiserver healthz status ...
	I0717 21:42:34.803582   23321 api_server.go:253] Checking apiserver healthz at https://192.168.39.220:8443/healthz ...
	I0717 21:42:34.810063   23321 api_server.go:279] https://192.168.39.220:8443/healthz returned 200:
	ok
	I0717 21:42:34.811776   23321 api_server.go:141] control plane version: v1.27.3
	I0717 21:42:34.811799   23321 api_server.go:131] duration metric: took 8.227404ms to wait for apiserver health ...
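For reference, the healthz probe logged above can be reproduced by hand against the same endpoint. A minimal sketch, assuming the addons-436248 kubeconfig context from this run is still available and the apiserver address 192.168.39.220:8443 from the log still applies:

    # context/profile name and endpoint are taken from this run's logs; adjust if different
    kubectl --context addons-436248 get --raw /healthz
    # or hit the logged endpoint directly; -k skips verification of the cluster's own CA
    curl -k https://192.168.39.220:8443/healthz

Either command should print "ok" when the apiserver is healthy, matching the 200 response recorded above.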
	I0717 21:42:34.811807   23321 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 21:42:34.819670   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:34.823539   23321 system_pods.go:59] 17 kube-system pods found
	I0717 21:42:34.823568   23321 system_pods.go:61] "coredns-5d78c9869d-t7knm" [eda2527f-7bae-434e-882b-2168797b2551] Running
	I0717 21:42:34.823574   23321 system_pods.go:61] "csi-hostpath-attacher-0" [a2df6586-68cb-4eff-9e7f-40393a4abbbd] Running
	I0717 21:42:34.823579   23321 system_pods.go:61] "csi-hostpath-resizer-0" [724d4c27-ecea-4c79-9f5a-63c2b06470dd] Running
	I0717 21:42:34.823586   23321 system_pods.go:61] "csi-hostpathplugin-v9d6d" [69506c07-b7c0-4aca-b5c5-dc39487d0985] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0717 21:42:34.823593   23321 system_pods.go:61] "etcd-addons-436248" [40eee58b-4ac3-48ea-94be-4d42c728d211] Running
	I0717 21:42:34.823598   23321 system_pods.go:61] "kube-apiserver-addons-436248" [4b21f54c-1485-4c37-9094-fb738fcfe458] Running
	I0717 21:42:34.823604   23321 system_pods.go:61] "kube-controller-manager-addons-436248" [7a69b3c4-bcf5-4b0c-960d-e2cef570cbfd] Running
	I0717 21:42:34.823611   23321 system_pods.go:61] "kube-ingress-dns-minikube" [77fff828-5c39-496b-a264-46ce3dbea30b] Running
	I0717 21:42:34.823618   23321 system_pods.go:61] "kube-proxy-sc8ph" [08451d11-5cf5-43ae-8c43-38a8bb5eb3b5] Running
	I0717 21:42:34.823625   23321 system_pods.go:61] "kube-scheduler-addons-436248" [cb4e8bec-b8a6-49f4-aacf-6db7f0a0b298] Running
	I0717 21:42:34.823632   23321 system_pods.go:61] "metrics-server-844d8db974-7bttp" [5a8b487f-451d-4f4a-9963-5a0a1498b248] Running
	I0717 21:42:34.823641   23321 system_pods.go:61] "registry-7lx77" [10227c49-9b69-4d98-a71d-d2255449d1fd] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0717 21:42:34.823662   23321 system_pods.go:61] "registry-proxy-j6mzk" [cac045cb-1481-4983-a628-954619436235] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0717 21:42:34.823681   23321 system_pods.go:61] "snapshot-controller-75bbb956b9-4mrtx" [55997754-b658-423e-b15d-e2546ad5fa5f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 21:42:34.823690   23321 system_pods.go:61] "snapshot-controller-75bbb956b9-5htqd" [ce64aedc-fc91-4783-8680-6e0c97df5013] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 21:42:34.823698   23321 system_pods.go:61] "storage-provisioner" [0c6ba8f3-a3c8-4bee-802b-d0808343eb88] Running
	I0717 21:42:34.823704   23321 system_pods.go:61] "tiller-deploy-6847666dc-8dj6l" [2e6dcebd-1b5f-43e2-a558-6b96996ffc39] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0717 21:42:34.823712   23321 system_pods.go:74] duration metric: took 11.900338ms to wait for pod list to return data ...
	I0717 21:42:34.823722   23321 default_sa.go:34] waiting for default service account to be created ...
	I0717 21:42:34.827051   23321 default_sa.go:45] found service account: "default"
	I0717 21:42:34.827072   23321 default_sa.go:55] duration metric: took 3.341412ms for default service account to be created ...
	I0717 21:42:34.827080   23321 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 21:42:34.835415   23321 system_pods.go:86] 17 kube-system pods found
	I0717 21:42:34.835443   23321 system_pods.go:89] "coredns-5d78c9869d-t7knm" [eda2527f-7bae-434e-882b-2168797b2551] Running
	I0717 21:42:34.835448   23321 system_pods.go:89] "csi-hostpath-attacher-0" [a2df6586-68cb-4eff-9e7f-40393a4abbbd] Running
	I0717 21:42:34.835453   23321 system_pods.go:89] "csi-hostpath-resizer-0" [724d4c27-ecea-4c79-9f5a-63c2b06470dd] Running
	I0717 21:42:34.835459   23321 system_pods.go:89] "csi-hostpathplugin-v9d6d" [69506c07-b7c0-4aca-b5c5-dc39487d0985] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0717 21:42:34.835466   23321 system_pods.go:89] "etcd-addons-436248" [40eee58b-4ac3-48ea-94be-4d42c728d211] Running
	I0717 21:42:34.835471   23321 system_pods.go:89] "kube-apiserver-addons-436248" [4b21f54c-1485-4c37-9094-fb738fcfe458] Running
	I0717 21:42:34.835475   23321 system_pods.go:89] "kube-controller-manager-addons-436248" [7a69b3c4-bcf5-4b0c-960d-e2cef570cbfd] Running
	I0717 21:42:34.835480   23321 system_pods.go:89] "kube-ingress-dns-minikube" [77fff828-5c39-496b-a264-46ce3dbea30b] Running
	I0717 21:42:34.835484   23321 system_pods.go:89] "kube-proxy-sc8ph" [08451d11-5cf5-43ae-8c43-38a8bb5eb3b5] Running
	I0717 21:42:34.835488   23321 system_pods.go:89] "kube-scheduler-addons-436248" [cb4e8bec-b8a6-49f4-aacf-6db7f0a0b298] Running
	I0717 21:42:34.835492   23321 system_pods.go:89] "metrics-server-844d8db974-7bttp" [5a8b487f-451d-4f4a-9963-5a0a1498b248] Running
	I0717 21:42:34.835497   23321 system_pods.go:89] "registry-7lx77" [10227c49-9b69-4d98-a71d-d2255449d1fd] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0717 21:42:34.835506   23321 system_pods.go:89] "registry-proxy-j6mzk" [cac045cb-1481-4983-a628-954619436235] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0717 21:42:34.835514   23321 system_pods.go:89] "snapshot-controller-75bbb956b9-4mrtx" [55997754-b658-423e-b15d-e2546ad5fa5f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 21:42:34.835521   23321 system_pods.go:89] "snapshot-controller-75bbb956b9-5htqd" [ce64aedc-fc91-4783-8680-6e0c97df5013] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 21:42:34.835530   23321 system_pods.go:89] "storage-provisioner" [0c6ba8f3-a3c8-4bee-802b-d0808343eb88] Running
	I0717 21:42:34.835538   23321 system_pods.go:89] "tiller-deploy-6847666dc-8dj6l" [2e6dcebd-1b5f-43e2-a558-6b96996ffc39] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0717 21:42:34.835545   23321 system_pods.go:126] duration metric: took 8.460983ms to wait for k8s-apps to be running ...
	I0717 21:42:34.835551   23321 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 21:42:34.835593   23321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 21:42:34.859957   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:34.861033   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:34.875558   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:34.880277   23321 system_svc.go:56] duration metric: took 44.717518ms WaitForService to wait for kubelet.
	I0717 21:42:34.880304   23321 kubeadm.go:581] duration metric: took 47.431828296s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 21:42:34.880326   23321 node_conditions.go:102] verifying NodePressure condition ...
	I0717 21:42:34.885314   23321 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 21:42:34.885351   23321 node_conditions.go:123] node cpu capacity is 2
	I0717 21:42:34.885366   23321 node_conditions.go:105] duration metric: took 5.034079ms to run NodePressure ...
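The NodePressure step above just reads the node's reported capacity. A rough hand-run equivalent, assuming the node carries the same name as the profile (addons-436248), would be:

    # node name assumed to match the profile name seen in this run's logs
    kubectl --context addons-436248 get node addons-436248 -o jsonpath='{.status.capacity}'

This should show the cpu: 2 and ephemeral-storage: 17784752Ki values recorded above.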
	I0717 21:42:34.885379   23321 start.go:228] waiting for startup goroutines ...
	I0717 21:42:35.318383   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:35.359287   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:35.361713   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:35.385260   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:35.840443   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:35.899636   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:35.900178   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:35.904180   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:36.318182   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:36.360033   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:36.361027   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:36.373539   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:36.819265   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:36.860591   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:36.860792   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:36.883540   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:37.317302   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:37.358861   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:37.359319   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:37.378030   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:37.818018   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:37.859772   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:37.860728   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:37.873532   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:38.317435   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:38.359441   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:38.359510   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:38.372563   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:38.818871   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:38.858272   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:38.858607   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:38.885568   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:39.324600   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:39.359761   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:39.360269   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:39.373433   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:39.817619   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:39.862112   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:39.864606   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:39.877143   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:40.318294   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:40.361105   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:40.362224   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:40.371536   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:40.817320   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:40.859854   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:40.862184   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:40.879589   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:41.317843   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:41.527212   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:41.528589   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:41.528868   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:41.819070   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:41.859855   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:41.864676   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:41.876063   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:42.317418   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:42.360554   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:42.362159   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:42.376288   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:42.818058   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:42.861004   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:42.862339   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:42.877541   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:43.317642   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:43.361241   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:43.361852   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:43.379525   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:43.817822   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:43.859660   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:43.863969   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:43.876887   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:44.318322   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:44.359708   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:44.359916   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:44.373304   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:44.831574   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:44.861818   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:44.862589   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:44.875483   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:45.318213   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:45.360621   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:45.360833   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:45.371777   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:45.819258   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:45.864874   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:45.874179   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:45.882918   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:46.319947   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:46.360066   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:46.360440   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:46.373314   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:46.817598   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:46.858914   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:46.862701   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:46.880537   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:47.317998   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:47.359933   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:47.361630   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:47.374323   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:47.819895   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:47.858195   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:47.860452   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:47.878121   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:48.317241   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:48.360690   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:48.360942   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:48.373757   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:48.818113   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:48.858991   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:48.859222   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:48.873010   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:49.371620   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:49.386667   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:42:49.393308   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:49.405928   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:49.824455   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:49.865620   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:49.867610   23321 kapi.go:107] duration metric: took 53.553429052s to wait for kubernetes.io/minikube-addons=registry ...
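The kapi.go lines above poll pods by label selector until they report Ready. A hand-run equivalent of the registry wait that just completed, assuming the addon pods live in kube-system as shown in the pod list earlier in this log, would be something like:

    # context and namespace assumed from this run's logs
    kubectl --context addons-436248 -n kube-system wait --for=condition=Ready pod -l kubernetes.io/minikube-addons=registry --timeout=5m

The selectors still being polled (app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=gcp-auth, kubernetes.io/minikube-addons=csi-hostpath-driver) can be waited on the same way with their own selectors and namespaces.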
	I0717 21:42:49.890524   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:50.317698   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:50.361979   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:50.374256   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:50.819534   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:50.869063   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:50.884034   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:51.326347   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:51.359376   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:51.373705   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:51.819550   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:51.860013   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:51.873574   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:52.318846   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:52.358800   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:52.376079   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:52.828873   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:52.868166   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:52.877459   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:53.454545   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:53.454840   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:53.455004   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:53.837599   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:53.864384   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:53.882728   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:54.324525   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:54.358749   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:54.378834   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:54.823216   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:54.859549   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:54.876918   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:55.322778   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:55.359188   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:55.373274   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:55.817908   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:55.860048   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:55.873501   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:56.317473   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:56.359522   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:56.372838   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:56.817307   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:56.859045   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:56.873357   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:57.318628   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:57.359675   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:57.374384   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:57.820644   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:57.858838   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:57.875095   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:58.319371   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:58.358732   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:58.373337   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:58.819383   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:58.860236   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:58.875963   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:59.317936   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:59.358701   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:59.372217   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:42:59.818069   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:42:59.858444   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:42:59.871992   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:43:00.318021   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:43:00.360116   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:43:00.378949   23321 kapi.go:107] duration metric: took 1m3.04210482s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0717 21:43:00.819525   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:43:00.859055   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:43:01.329186   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:43:01.358896   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:43:01.817593   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:43:01.860224   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:43:02.317637   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:43:02.361825   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:43:02.818243   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:43:02.858758   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:43:03.323747   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:43:03.367928   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:43:03.817288   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:43:03.858854   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:43:04.460627   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:43:04.462122   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:43:04.826400   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:43:04.858593   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:43:05.325716   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:43:05.359644   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:43:05.817282   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:43:05.859076   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:43:06.317662   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:43:06.358964   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:43:06.817477   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:43:06.859232   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:43:07.317770   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:43:07.359194   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:43:07.822658   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:43:07.866392   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:43:08.318436   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:43:08.359108   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:43:08.817480   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:43:08.859027   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:43:09.323389   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:43:09.358908   23321 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:43:09.818597   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:43:09.861816   23321 kapi.go:107] duration metric: took 1m13.550835622s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0717 21:43:10.317118   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:43:10.817265   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:43:11.332889   23321 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:43:11.819062   23321 kapi.go:107] duration metric: took 1m11.547266985s to wait for kubernetes.io/minikube-addons=gcp-auth ...
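	The three kapi.go waits above (kubernetes.io/minikube-addons=csi-hostpath-driver, app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=gcp-auth) can be reproduced by hand when triaging a run like this. The following is only a sketch of an equivalent manual check: the label selectors are copied from the log lines above, while the namespaces (kube-system, ingress-nginx, gcp-auth) and the 6-minute timeout are assumptions for illustration, not values taken from minikube itself.

	# Hypothetical manual equivalents of the label waits logged above
	kubectl --context addons-436248 -n kube-system wait pod \
	  -l kubernetes.io/minikube-addons=csi-hostpath-driver --for=condition=ready --timeout=6m
	kubectl --context addons-436248 -n ingress-nginx wait pod \
	  -l app.kubernetes.io/name=ingress-nginx --for=condition=ready --timeout=6m
	kubectl --context addons-436248 -n gcp-auth wait pod \
	  -l kubernetes.io/minikube-addons=gcp-auth --for=condition=ready --timeout=6m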
	I0717 21:43:11.821182   23321 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-436248 cluster.
	I0717 21:43:11.822749   23321 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0717 21:43:11.824188   23321 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0717 21:43:11.825766   23321 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner, inspektor-gadget, helm-tiller, default-storageclass, metrics-server, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0717 21:43:11.827251   23321 addons.go:502] enable addons completed in 1m24.975676116s: enabled=[cloud-spanner ingress-dns storage-provisioner inspektor-gadget helm-tiller default-storageclass metrics-server volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0717 21:43:11.827288   23321 start.go:233] waiting for cluster config update ...
	I0717 21:43:11.827304   23321 start.go:242] writing updated cluster config ...
	I0717 21:43:11.827603   23321 ssh_runner.go:195] Run: rm -f paused
	I0717 21:43:11.881743   23321 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 21:43:11.883672   23321 out.go:177] * Done! kubectl is now configured to use "addons-436248" cluster and "default" namespace by default
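	The gcp-auth hint logged at 21:43:11 says a pod can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. A minimal sketch of what that looks like, assuming a throwaway pod named skip-gcp-auth; the log message only requires the label key, so the value "true" here is an assumption:

	# Hypothetical pod that the gcp-auth webhook should skip (label key from the log above)
	kubectl --context addons-436248 run skip-gcp-auth --image=nginx \
	  --labels=gcp-auth-skip-secret=true

	Per the same hint, pods created before the addon finished would need to be recreated, or the addon re-enabled with --refresh, for credentials to be mounted into them.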
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-07-17 21:41:01 UTC, ends at Mon 2023-07-17 21:46:00 UTC. --
	Jul 17 21:45:59 addons-436248 crio[715]: time="2023-07-17 21:45:59.752668166Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f3601046e4209090bf4af69a020e5614ca9df5fa1687c9fbf19f59157c33c32a,PodSandboxId:728d83f036d0ddcd8ba14e02210b7c30f7715adb4657e47f5841f061e10041ee,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1689630352008033616,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-65bdb79f98-9klcc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3b543087-22f8-43c2-8644-dcd28a40610b,},Annotations:map[string]string{io.kubernetes.container.hash: a73edf38,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dbb5cb12ecc8790439b84da110926e82b8d630125bc2970846fba1aa00c1e66,PodSandboxId:7fbc4c24a76042204f824e106c1026bedc407aa99b07a21713c70d89a8f2b2b7,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1689630212968682819,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c7f365e2-76da-44c0-8ea2-4faa3cb79d1c,},Annotations:map[string]string{io.kubernet
es.container.hash: 89d8aa43,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a938ba7bdc52430ac0c1f5454fd43d76c5b826dcf92955f06a44d997e20b0c2,PodSandboxId:cbe0e409c3329f8bc6e831b79c63ecfcd473f11efea3304daf003d15a0af0ca0,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,State:CONTAINER_RUNNING,CreatedAt:1689630199974898204,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-66f6498c69-97mrg,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 07438fa1-1bae-4ea9-b549-f6cef51a1865,},Annotations:map[string]string{io.kubernetes.container.hash: d706d80b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:671ae5f855249d11c6f69d167556bf40b5aaab8acc14d3652c882ea5f8bfa1a1,PodSandboxId:b24c92118f9cd21a6adcfce1f9a3a7e1c527366f5319d9b41f3aaf889fd6b84d,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1689630191297774826,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-58478865f7-h6brp,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 01d77230-5801-46a8-999d-772e93bbf437,},Annotations:map[string]string{io.kubernetes.container.hash: 9cdbf1c6,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44728389da4be49a851b8b213b8a599d8eeebf65884e286bd4c791b46c1ea462,PodSandboxId:907b1c9a4e1c2194c9ddd25df45755f53cc61de10c84099414c8fb898f41cf41,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdb
f80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1689630169492070962,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-b7z2z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8bf74879-0330-496d-a6b8-bbc7321b85c2,},Annotations:map[string]string{io.kubernetes.container.hash: f74c808b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3190c649bc3393e7c24d7fbfeb3623374a98bdf40e7ab1304e2f5ff3492784bd,PodSandboxId:a0d7640bdbdb73df7b1ae4454030ea458f37f52fff577075f9e6372f21801856,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441
c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689630158992312836,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6ba8f3-a3c8-4bee-802b-d0808343eb88,},Annotations:map[string]string{io.kubernetes.container.hash: a269ab10,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fde5a7adfc917d3d7ad032d039380389626127f418b2ce30909def05086b9e0,PodSandboxId:102967e9de599d3ac710e1957821c03f7183bac6e36fa433cff288337c24232a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certg
en@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1689630154033686658,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-c7n5x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ab0adbc0-ae28-492a-a2f7-04fc093145bd,},Annotations:map[string]string{io.kubernetes.container.hash: f73f178b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07e367298a9b7d53b90707e176d3d438bb19f643de894b50f1e2dff1d2d577e5,PodSandboxId:abe2724c6fb7a42380073d88285167e4ef271d2c8b9fe50da42183c2fd1cd4b2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8
428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689630129238358704,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sc8ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08451d11-5cf5-43ae-8c43-38a8bb5eb3b5,},Annotations:map[string]string{io.kubernetes.container.hash: 64f5f02f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7335e3ffa97ad45c56516f5b489d5e9896a8319acd3a4c5707189fac89cdf713,PodSandboxId:a0d7640bdbdb73df7b1ae4454030ea458f37f52fff577075f9e6372f21801856,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19
e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689630122944310021,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6ba8f3-a3c8-4bee-802b-d0808343eb88,},Annotations:map[string]string{io.kubernetes.container.hash: a269ab10,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d41845053c903c7826ba5ab5bc709dcb660152eed76a21bd870c2802278bce,PodSandboxId:a6c65c1e4ce2c8740143ae11e1f3c4f3f929dd64bc1386bcf7c0e661b9700c3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43a
de8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689630115184305682,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-t7knm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda2527f-7bae-434e-882b-2168797b2551,},Annotations:map[string]string{io.kubernetes.container.hash: 6fccd973,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c706133d070490c899f58774aedbe7688ea79c779f97ee4b2514b8548a24686e,PodSandboxId:f60ce686bd66c4827a4542614718145d34cd5a7e25d880531384250da6f9a6e9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&I
mageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689630087780130806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-436248,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8921e005c32f1a031e3e117b1d85366,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14e326b6fdeba0864c4242c320498c8cfce391a355a32943aaf36b11a541aa0b,PodSandboxId:7b1a2794a23e064e5a6abc6d5801f11c79124d9b736d90b56a806ec11c00c9ba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSp
ec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689630087592818341,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-436248,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf5034c03a1062d470740c9ba39e4948,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a6d75527361c7649290c54937aef22ea028840b8b01894c24cbd6c9ffe55054,PodSandboxId:77edb0c2c02a98ec55a65073741306332fb11390a133eb400645308ea673e718,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&Image
Spec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689630087380013443,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-436248,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a713d45dbfe5c4b230509adae337504a,},Annotations:map[string]string{io.kubernetes.container.hash: f8df1e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bffccfed78d13ad791b825fcc28eb8151b85e876910b8b0db9f39eaa21fc48c,PodSandboxId:e375f9f3163abc7d97d32ac26f6ca77bcfadc8f449399814bf5df78bb64ac5ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a3
5ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689630087360339339,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-436248,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ef23b6ff0f5060bf5ab9f91ac16489,},Annotations:map[string]string{io.kubernetes.container.hash: 5b2c31a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f5077ea3-2524-481c-a795-20a3a7922b24 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 21:45:59 addons-436248 crio[715]: time="2023-07-17 21:45:59.791851619Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f79def83-aa84-488b-94ee-af5fa7c085fa name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 21:45:59 addons-436248 crio[715]: time="2023-07-17 21:45:59.791955349Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f79def83-aa84-488b-94ee-af5fa7c085fa name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 21:45:59 addons-436248 crio[715]: time="2023-07-17 21:45:59.792387898Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f3601046e4209090bf4af69a020e5614ca9df5fa1687c9fbf19f59157c33c32a,PodSandboxId:728d83f036d0ddcd8ba14e02210b7c30f7715adb4657e47f5841f061e10041ee,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1689630352008033616,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-65bdb79f98-9klcc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3b543087-22f8-43c2-8644-dcd28a40610b,},Annotations:map[string]string{io.kubernetes.container.hash: a73edf38,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dbb5cb12ecc8790439b84da110926e82b8d630125bc2970846fba1aa00c1e66,PodSandboxId:7fbc4c24a76042204f824e106c1026bedc407aa99b07a21713c70d89a8f2b2b7,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1689630212968682819,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c7f365e2-76da-44c0-8ea2-4faa3cb79d1c,},Annotations:map[string]string{io.kubernet
es.container.hash: 89d8aa43,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a938ba7bdc52430ac0c1f5454fd43d76c5b826dcf92955f06a44d997e20b0c2,PodSandboxId:cbe0e409c3329f8bc6e831b79c63ecfcd473f11efea3304daf003d15a0af0ca0,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,State:CONTAINER_RUNNING,CreatedAt:1689630199974898204,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-66f6498c69-97mrg,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 07438fa1-1bae-4ea9-b549-f6cef51a1865,},Annotations:map[string]string{io.kubernetes.container.hash: d706d80b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:671ae5f855249d11c6f69d167556bf40b5aaab8acc14d3652c882ea5f8bfa1a1,PodSandboxId:b24c92118f9cd21a6adcfce1f9a3a7e1c527366f5319d9b41f3aaf889fd6b84d,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1689630191297774826,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-58478865f7-h6brp,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 01d77230-5801-46a8-999d-772e93bbf437,},Annotations:map[string]string{io.kubernetes.container.hash: 9cdbf1c6,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44728389da4be49a851b8b213b8a599d8eeebf65884e286bd4c791b46c1ea462,PodSandboxId:907b1c9a4e1c2194c9ddd25df45755f53cc61de10c84099414c8fb898f41cf41,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdb
f80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1689630169492070962,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-b7z2z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8bf74879-0330-496d-a6b8-bbc7321b85c2,},Annotations:map[string]string{io.kubernetes.container.hash: f74c808b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3190c649bc3393e7c24d7fbfeb3623374a98bdf40e7ab1304e2f5ff3492784bd,PodSandboxId:a0d7640bdbdb73df7b1ae4454030ea458f37f52fff577075f9e6372f21801856,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441
c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689630158992312836,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6ba8f3-a3c8-4bee-802b-d0808343eb88,},Annotations:map[string]string{io.kubernetes.container.hash: a269ab10,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fde5a7adfc917d3d7ad032d039380389626127f418b2ce30909def05086b9e0,PodSandboxId:102967e9de599d3ac710e1957821c03f7183bac6e36fa433cff288337c24232a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certg
en@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1689630154033686658,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-c7n5x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ab0adbc0-ae28-492a-a2f7-04fc093145bd,},Annotations:map[string]string{io.kubernetes.container.hash: f73f178b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07e367298a9b7d53b90707e176d3d438bb19f643de894b50f1e2dff1d2d577e5,PodSandboxId:abe2724c6fb7a42380073d88285167e4ef271d2c8b9fe50da42183c2fd1cd4b2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8
428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689630129238358704,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sc8ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08451d11-5cf5-43ae-8c43-38a8bb5eb3b5,},Annotations:map[string]string{io.kubernetes.container.hash: 64f5f02f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7335e3ffa97ad45c56516f5b489d5e9896a8319acd3a4c5707189fac89cdf713,PodSandboxId:a0d7640bdbdb73df7b1ae4454030ea458f37f52fff577075f9e6372f21801856,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19
e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689630122944310021,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6ba8f3-a3c8-4bee-802b-d0808343eb88,},Annotations:map[string]string{io.kubernetes.container.hash: a269ab10,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d41845053c903c7826ba5ab5bc709dcb660152eed76a21bd870c2802278bce,PodSandboxId:a6c65c1e4ce2c8740143ae11e1f3c4f3f929dd64bc1386bcf7c0e661b9700c3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43a
de8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689630115184305682,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-t7knm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda2527f-7bae-434e-882b-2168797b2551,},Annotations:map[string]string{io.kubernetes.container.hash: 6fccd973,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c706133d070490c899f58774aedbe7688ea79c779f97ee4b2514b8548a24686e,PodSandboxId:f60ce686bd66c4827a4542614718145d34cd5a7e25d880531384250da6f9a6e9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&I
mageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689630087780130806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-436248,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8921e005c32f1a031e3e117b1d85366,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14e326b6fdeba0864c4242c320498c8cfce391a355a32943aaf36b11a541aa0b,PodSandboxId:7b1a2794a23e064e5a6abc6d5801f11c79124d9b736d90b56a806ec11c00c9ba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSp
ec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689630087592818341,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-436248,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf5034c03a1062d470740c9ba39e4948,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a6d75527361c7649290c54937aef22ea028840b8b01894c24cbd6c9ffe55054,PodSandboxId:77edb0c2c02a98ec55a65073741306332fb11390a133eb400645308ea673e718,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&Image
Spec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689630087380013443,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-436248,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a713d45dbfe5c4b230509adae337504a,},Annotations:map[string]string{io.kubernetes.container.hash: f8df1e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bffccfed78d13ad791b825fcc28eb8151b85e876910b8b0db9f39eaa21fc48c,PodSandboxId:e375f9f3163abc7d97d32ac26f6ca77bcfadc8f449399814bf5df78bb64ac5ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a3
5ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689630087360339339,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-436248,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ef23b6ff0f5060bf5ab9f91ac16489,},Annotations:map[string]string{io.kubernetes.container.hash: 5b2c31a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f79def83-aa84-488b-94ee-af5fa7c085fa name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 21:45:59 addons-436248 crio[715]: time="2023-07-17 21:45:59.825635090Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f2477dfc-8a4e-4a04-9323-b8d1ee02f8fa name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 21:45:59 addons-436248 crio[715]: time="2023-07-17 21:45:59.825737465Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f2477dfc-8a4e-4a04-9323-b8d1ee02f8fa name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 21:45:59 addons-436248 crio[715]: time="2023-07-17 21:45:59.826162377Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f3601046e4209090bf4af69a020e5614ca9df5fa1687c9fbf19f59157c33c32a,PodSandboxId:728d83f036d0ddcd8ba14e02210b7c30f7715adb4657e47f5841f061e10041ee,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1689630352008033616,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-65bdb79f98-9klcc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3b543087-22f8-43c2-8644-dcd28a40610b,},Annotations:map[string]string{io.kubernetes.container.hash: a73edf38,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dbb5cb12ecc8790439b84da110926e82b8d630125bc2970846fba1aa00c1e66,PodSandboxId:7fbc4c24a76042204f824e106c1026bedc407aa99b07a21713c70d89a8f2b2b7,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1689630212968682819,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c7f365e2-76da-44c0-8ea2-4faa3cb79d1c,},Annotations:map[string]string{io.kubernet
es.container.hash: 89d8aa43,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a938ba7bdc52430ac0c1f5454fd43d76c5b826dcf92955f06a44d997e20b0c2,PodSandboxId:cbe0e409c3329f8bc6e831b79c63ecfcd473f11efea3304daf003d15a0af0ca0,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,State:CONTAINER_RUNNING,CreatedAt:1689630199974898204,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-66f6498c69-97mrg,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 07438fa1-1bae-4ea9-b549-f6cef51a1865,},Annotations:map[string]string{io.kubernetes.container.hash: d706d80b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:671ae5f855249d11c6f69d167556bf40b5aaab8acc14d3652c882ea5f8bfa1a1,PodSandboxId:b24c92118f9cd21a6adcfce1f9a3a7e1c527366f5319d9b41f3aaf889fd6b84d,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1689630191297774826,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-58478865f7-h6brp,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 01d77230-5801-46a8-999d-772e93bbf437,},Annotations:map[string]string{io.kubernetes.container.hash: 9cdbf1c6,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44728389da4be49a851b8b213b8a599d8eeebf65884e286bd4c791b46c1ea462,PodSandboxId:907b1c9a4e1c2194c9ddd25df45755f53cc61de10c84099414c8fb898f41cf41,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdb
f80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1689630169492070962,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-b7z2z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8bf74879-0330-496d-a6b8-bbc7321b85c2,},Annotations:map[string]string{io.kubernetes.container.hash: f74c808b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3190c649bc3393e7c24d7fbfeb3623374a98bdf40e7ab1304e2f5ff3492784bd,PodSandboxId:a0d7640bdbdb73df7b1ae4454030ea458f37f52fff577075f9e6372f21801856,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441
c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689630158992312836,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6ba8f3-a3c8-4bee-802b-d0808343eb88,},Annotations:map[string]string{io.kubernetes.container.hash: a269ab10,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fde5a7adfc917d3d7ad032d039380389626127f418b2ce30909def05086b9e0,PodSandboxId:102967e9de599d3ac710e1957821c03f7183bac6e36fa433cff288337c24232a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certg
en@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1689630154033686658,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-c7n5x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ab0adbc0-ae28-492a-a2f7-04fc093145bd,},Annotations:map[string]string{io.kubernetes.container.hash: f73f178b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07e367298a9b7d53b90707e176d3d438bb19f643de894b50f1e2dff1d2d577e5,PodSandboxId:abe2724c6fb7a42380073d88285167e4ef271d2c8b9fe50da42183c2fd1cd4b2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8
428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689630129238358704,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sc8ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08451d11-5cf5-43ae-8c43-38a8bb5eb3b5,},Annotations:map[string]string{io.kubernetes.container.hash: 64f5f02f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7335e3ffa97ad45c56516f5b489d5e9896a8319acd3a4c5707189fac89cdf713,PodSandboxId:a0d7640bdbdb73df7b1ae4454030ea458f37f52fff577075f9e6372f21801856,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19
e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689630122944310021,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6ba8f3-a3c8-4bee-802b-d0808343eb88,},Annotations:map[string]string{io.kubernetes.container.hash: a269ab10,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d41845053c903c7826ba5ab5bc709dcb660152eed76a21bd870c2802278bce,PodSandboxId:a6c65c1e4ce2c8740143ae11e1f3c4f3f929dd64bc1386bcf7c0e661b9700c3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43a
de8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689630115184305682,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-t7knm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda2527f-7bae-434e-882b-2168797b2551,},Annotations:map[string]string{io.kubernetes.container.hash: 6fccd973,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c706133d070490c899f58774aedbe7688ea79c779f97ee4b2514b8548a24686e,PodSandboxId:f60ce686bd66c4827a4542614718145d34cd5a7e25d880531384250da6f9a6e9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&I
mageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689630087780130806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-436248,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8921e005c32f1a031e3e117b1d85366,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14e326b6fdeba0864c4242c320498c8cfce391a355a32943aaf36b11a541aa0b,PodSandboxId:7b1a2794a23e064e5a6abc6d5801f11c79124d9b736d90b56a806ec11c00c9ba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSp
ec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689630087592818341,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-436248,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf5034c03a1062d470740c9ba39e4948,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a6d75527361c7649290c54937aef22ea028840b8b01894c24cbd6c9ffe55054,PodSandboxId:77edb0c2c02a98ec55a65073741306332fb11390a133eb400645308ea673e718,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&Image
Spec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689630087380013443,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-436248,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a713d45dbfe5c4b230509adae337504a,},Annotations:map[string]string{io.kubernetes.container.hash: f8df1e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bffccfed78d13ad791b825fcc28eb8151b85e876910b8b0db9f39eaa21fc48c,PodSandboxId:e375f9f3163abc7d97d32ac26f6ca77bcfadc8f449399814bf5df78bb64ac5ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a3
5ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689630087360339339,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-436248,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ef23b6ff0f5060bf5ab9f91ac16489,},Annotations:map[string]string{io.kubernetes.container.hash: 5b2c31a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f2477dfc-8a4e-4a04-9323-b8d1ee02f8fa name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 21:45:59 addons-436248 crio[715]: time="2023-07-17 21:45:59.866037804Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fca5f426-3f4b-40b5-8e99-ec5c4c9e58a3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 21:45:59 addons-436248 crio[715]: time="2023-07-17 21:45:59.866142500Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fca5f426-3f4b-40b5-8e99-ec5c4c9e58a3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 21:45:59 addons-436248 crio[715]: time="2023-07-17 21:45:59.866441659Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f3601046e4209090bf4af69a020e5614ca9df5fa1687c9fbf19f59157c33c32a,PodSandboxId:728d83f036d0ddcd8ba14e02210b7c30f7715adb4657e47f5841f061e10041ee,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1689630352008033616,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-65bdb79f98-9klcc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3b543087-22f8-43c2-8644-dcd28a40610b,},Annotations:map[string]string{io.kubernetes.container.hash: a73edf38,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dbb5cb12ecc8790439b84da110926e82b8d630125bc2970846fba1aa00c1e66,PodSandboxId:7fbc4c24a76042204f824e106c1026bedc407aa99b07a21713c70d89a8f2b2b7,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1689630212968682819,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c7f365e2-76da-44c0-8ea2-4faa3cb79d1c,},Annotations:map[string]string{io.kubernet
es.container.hash: 89d8aa43,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a938ba7bdc52430ac0c1f5454fd43d76c5b826dcf92955f06a44d997e20b0c2,PodSandboxId:cbe0e409c3329f8bc6e831b79c63ecfcd473f11efea3304daf003d15a0af0ca0,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,State:CONTAINER_RUNNING,CreatedAt:1689630199974898204,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-66f6498c69-97mrg,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 07438fa1-1bae-4ea9-b549-f6cef51a1865,},Annotations:map[string]string{io.kubernetes.container.hash: d706d80b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:671ae5f855249d11c6f69d167556bf40b5aaab8acc14d3652c882ea5f8bfa1a1,PodSandboxId:b24c92118f9cd21a6adcfce1f9a3a7e1c527366f5319d9b41f3aaf889fd6b84d,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1689630191297774826,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-58478865f7-h6brp,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 01d77230-5801-46a8-999d-772e93bbf437,},Annotations:map[string]string{io.kubernetes.container.hash: 9cdbf1c6,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44728389da4be49a851b8b213b8a599d8eeebf65884e286bd4c791b46c1ea462,PodSandboxId:907b1c9a4e1c2194c9ddd25df45755f53cc61de10c84099414c8fb898f41cf41,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdb
f80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1689630169492070962,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-b7z2z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8bf74879-0330-496d-a6b8-bbc7321b85c2,},Annotations:map[string]string{io.kubernetes.container.hash: f74c808b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3190c649bc3393e7c24d7fbfeb3623374a98bdf40e7ab1304e2f5ff3492784bd,PodSandboxId:a0d7640bdbdb73df7b1ae4454030ea458f37f52fff577075f9e6372f21801856,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441
c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689630158992312836,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6ba8f3-a3c8-4bee-802b-d0808343eb88,},Annotations:map[string]string{io.kubernetes.container.hash: a269ab10,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fde5a7adfc917d3d7ad032d039380389626127f418b2ce30909def05086b9e0,PodSandboxId:102967e9de599d3ac710e1957821c03f7183bac6e36fa433cff288337c24232a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certg
en@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1689630154033686658,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-c7n5x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ab0adbc0-ae28-492a-a2f7-04fc093145bd,},Annotations:map[string]string{io.kubernetes.container.hash: f73f178b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07e367298a9b7d53b90707e176d3d438bb19f643de894b50f1e2dff1d2d577e5,PodSandboxId:abe2724c6fb7a42380073d88285167e4ef271d2c8b9fe50da42183c2fd1cd4b2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8
428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689630129238358704,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sc8ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08451d11-5cf5-43ae-8c43-38a8bb5eb3b5,},Annotations:map[string]string{io.kubernetes.container.hash: 64f5f02f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7335e3ffa97ad45c56516f5b489d5e9896a8319acd3a4c5707189fac89cdf713,PodSandboxId:a0d7640bdbdb73df7b1ae4454030ea458f37f52fff577075f9e6372f21801856,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19
e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689630122944310021,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6ba8f3-a3c8-4bee-802b-d0808343eb88,},Annotations:map[string]string{io.kubernetes.container.hash: a269ab10,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d41845053c903c7826ba5ab5bc709dcb660152eed76a21bd870c2802278bce,PodSandboxId:a6c65c1e4ce2c8740143ae11e1f3c4f3f929dd64bc1386bcf7c0e661b9700c3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43a
de8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689630115184305682,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-t7knm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda2527f-7bae-434e-882b-2168797b2551,},Annotations:map[string]string{io.kubernetes.container.hash: 6fccd973,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c706133d070490c899f58774aedbe7688ea79c779f97ee4b2514b8548a24686e,PodSandboxId:f60ce686bd66c4827a4542614718145d34cd5a7e25d880531384250da6f9a6e9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&I
mageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689630087780130806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-436248,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8921e005c32f1a031e3e117b1d85366,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14e326b6fdeba0864c4242c320498c8cfce391a355a32943aaf36b11a541aa0b,PodSandboxId:7b1a2794a23e064e5a6abc6d5801f11c79124d9b736d90b56a806ec11c00c9ba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSp
ec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689630087592818341,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-436248,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf5034c03a1062d470740c9ba39e4948,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a6d75527361c7649290c54937aef22ea028840b8b01894c24cbd6c9ffe55054,PodSandboxId:77edb0c2c02a98ec55a65073741306332fb11390a133eb400645308ea673e718,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&Image
Spec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689630087380013443,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-436248,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a713d45dbfe5c4b230509adae337504a,},Annotations:map[string]string{io.kubernetes.container.hash: f8df1e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bffccfed78d13ad791b825fcc28eb8151b85e876910b8b0db9f39eaa21fc48c,PodSandboxId:e375f9f3163abc7d97d32ac26f6ca77bcfadc8f449399814bf5df78bb64ac5ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a3
5ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689630087360339339,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-436248,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ef23b6ff0f5060bf5ab9f91ac16489,},Annotations:map[string]string{io.kubernetes.container.hash: 5b2c31a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fca5f426-3f4b-40b5-8e99-ec5c4c9e58a3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 21:45:59 addons-436248 crio[715]: time="2023-07-17 21:45:59.916186200Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6a020276-8e95-443c-8b41-c2a1ee7232d3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 21:45:59 addons-436248 crio[715]: time="2023-07-17 21:45:59.916283078Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6a020276-8e95-443c-8b41-c2a1ee7232d3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 21:45:59 addons-436248 crio[715]: time="2023-07-17 21:45:59.916716066Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f3601046e4209090bf4af69a020e5614ca9df5fa1687c9fbf19f59157c33c32a,PodSandboxId:728d83f036d0ddcd8ba14e02210b7c30f7715adb4657e47f5841f061e10041ee,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1689630352008033616,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-65bdb79f98-9klcc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3b543087-22f8-43c2-8644-dcd28a40610b,},Annotations:map[string]string{io.kubernetes.container.hash: a73edf38,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dbb5cb12ecc8790439b84da110926e82b8d630125bc2970846fba1aa00c1e66,PodSandboxId:7fbc4c24a76042204f824e106c1026bedc407aa99b07a21713c70d89a8f2b2b7,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1689630212968682819,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c7f365e2-76da-44c0-8ea2-4faa3cb79d1c,},Annotations:map[string]string{io.kubernet
es.container.hash: 89d8aa43,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a938ba7bdc52430ac0c1f5454fd43d76c5b826dcf92955f06a44d997e20b0c2,PodSandboxId:cbe0e409c3329f8bc6e831b79c63ecfcd473f11efea3304daf003d15a0af0ca0,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,State:CONTAINER_RUNNING,CreatedAt:1689630199974898204,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-66f6498c69-97mrg,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 07438fa1-1bae-4ea9-b549-f6cef51a1865,},Annotations:map[string]string{io.kubernetes.container.hash: d706d80b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:671ae5f855249d11c6f69d167556bf40b5aaab8acc14d3652c882ea5f8bfa1a1,PodSandboxId:b24c92118f9cd21a6adcfce1f9a3a7e1c527366f5319d9b41f3aaf889fd6b84d,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1689630191297774826,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-58478865f7-h6brp,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 01d77230-5801-46a8-999d-772e93bbf437,},Annotations:map[string]string{io.kubernetes.container.hash: 9cdbf1c6,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44728389da4be49a851b8b213b8a599d8eeebf65884e286bd4c791b46c1ea462,PodSandboxId:907b1c9a4e1c2194c9ddd25df45755f53cc61de10c84099414c8fb898f41cf41,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdb
f80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1689630169492070962,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-b7z2z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8bf74879-0330-496d-a6b8-bbc7321b85c2,},Annotations:map[string]string{io.kubernetes.container.hash: f74c808b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3190c649bc3393e7c24d7fbfeb3623374a98bdf40e7ab1304e2f5ff3492784bd,PodSandboxId:a0d7640bdbdb73df7b1ae4454030ea458f37f52fff577075f9e6372f21801856,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441
c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689630158992312836,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6ba8f3-a3c8-4bee-802b-d0808343eb88,},Annotations:map[string]string{io.kubernetes.container.hash: a269ab10,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fde5a7adfc917d3d7ad032d039380389626127f418b2ce30909def05086b9e0,PodSandboxId:102967e9de599d3ac710e1957821c03f7183bac6e36fa433cff288337c24232a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certg
en@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1689630154033686658,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-c7n5x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ab0adbc0-ae28-492a-a2f7-04fc093145bd,},Annotations:map[string]string{io.kubernetes.container.hash: f73f178b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07e367298a9b7d53b90707e176d3d438bb19f643de894b50f1e2dff1d2d577e5,PodSandboxId:abe2724c6fb7a42380073d88285167e4ef271d2c8b9fe50da42183c2fd1cd4b2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8
428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689630129238358704,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sc8ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08451d11-5cf5-43ae-8c43-38a8bb5eb3b5,},Annotations:map[string]string{io.kubernetes.container.hash: 64f5f02f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7335e3ffa97ad45c56516f5b489d5e9896a8319acd3a4c5707189fac89cdf713,PodSandboxId:a0d7640bdbdb73df7b1ae4454030ea458f37f52fff577075f9e6372f21801856,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19
e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689630122944310021,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6ba8f3-a3c8-4bee-802b-d0808343eb88,},Annotations:map[string]string{io.kubernetes.container.hash: a269ab10,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d41845053c903c7826ba5ab5bc709dcb660152eed76a21bd870c2802278bce,PodSandboxId:a6c65c1e4ce2c8740143ae11e1f3c4f3f929dd64bc1386bcf7c0e661b9700c3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43a
de8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689630115184305682,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-t7knm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda2527f-7bae-434e-882b-2168797b2551,},Annotations:map[string]string{io.kubernetes.container.hash: 6fccd973,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c706133d070490c899f58774aedbe7688ea79c779f97ee4b2514b8548a24686e,PodSandboxId:f60ce686bd66c4827a4542614718145d34cd5a7e25d880531384250da6f9a6e9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&I
mageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689630087780130806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-436248,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8921e005c32f1a031e3e117b1d85366,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14e326b6fdeba0864c4242c320498c8cfce391a355a32943aaf36b11a541aa0b,PodSandboxId:7b1a2794a23e064e5a6abc6d5801f11c79124d9b736d90b56a806ec11c00c9ba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSp
ec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689630087592818341,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-436248,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf5034c03a1062d470740c9ba39e4948,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a6d75527361c7649290c54937aef22ea028840b8b01894c24cbd6c9ffe55054,PodSandboxId:77edb0c2c02a98ec55a65073741306332fb11390a133eb400645308ea673e718,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&Image
Spec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689630087380013443,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-436248,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a713d45dbfe5c4b230509adae337504a,},Annotations:map[string]string{io.kubernetes.container.hash: f8df1e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bffccfed78d13ad791b825fcc28eb8151b85e876910b8b0db9f39eaa21fc48c,PodSandboxId:e375f9f3163abc7d97d32ac26f6ca77bcfadc8f449399814bf5df78bb64ac5ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a3
5ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689630087360339339,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-436248,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ef23b6ff0f5060bf5ab9f91ac16489,},Annotations:map[string]string{io.kubernetes.container.hash: 5b2c31a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6a020276-8e95-443c-8b41-c2a1ee7232d3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 21:45:59 addons-436248 crio[715]: time="2023-07-17 21:45:59.951873961Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=856cf1fb-bd27-42c9-8dc0-a49c5ec1ceda name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 21:45:59 addons-436248 crio[715]: time="2023-07-17 21:45:59.951965787Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=856cf1fb-bd27-42c9-8dc0-a49c5ec1ceda name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 21:45:59 addons-436248 crio[715]: time="2023-07-17 21:45:59.952321429Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f3601046e4209090bf4af69a020e5614ca9df5fa1687c9fbf19f59157c33c32a,PodSandboxId:728d83f036d0ddcd8ba14e02210b7c30f7715adb4657e47f5841f061e10041ee,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1689630352008033616,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-65bdb79f98-9klcc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3b543087-22f8-43c2-8644-dcd28a40610b,},Annotations:map[string]string{io.kubernetes.container.hash: a73edf38,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dbb5cb12ecc8790439b84da110926e82b8d630125bc2970846fba1aa00c1e66,PodSandboxId:7fbc4c24a76042204f824e106c1026bedc407aa99b07a21713c70d89a8f2b2b7,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1689630212968682819,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c7f365e2-76da-44c0-8ea2-4faa3cb79d1c,},Annotations:map[string]string{io.kubernet
es.container.hash: 89d8aa43,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a938ba7bdc52430ac0c1f5454fd43d76c5b826dcf92955f06a44d997e20b0c2,PodSandboxId:cbe0e409c3329f8bc6e831b79c63ecfcd473f11efea3304daf003d15a0af0ca0,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,State:CONTAINER_RUNNING,CreatedAt:1689630199974898204,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-66f6498c69-97mrg,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 07438fa1-1bae-4ea9-b549-f6cef51a1865,},Annotations:map[string]string{io.kubernetes.container.hash: d706d80b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:671ae5f855249d11c6f69d167556bf40b5aaab8acc14d3652c882ea5f8bfa1a1,PodSandboxId:b24c92118f9cd21a6adcfce1f9a3a7e1c527366f5319d9b41f3aaf889fd6b84d,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1689630191297774826,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-58478865f7-h6brp,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 01d77230-5801-46a8-999d-772e93bbf437,},Annotations:map[string]string{io.kubernetes.container.hash: 9cdbf1c6,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44728389da4be49a851b8b213b8a599d8eeebf65884e286bd4c791b46c1ea462,PodSandboxId:907b1c9a4e1c2194c9ddd25df45755f53cc61de10c84099414c8fb898f41cf41,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdb
f80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1689630169492070962,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-b7z2z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8bf74879-0330-496d-a6b8-bbc7321b85c2,},Annotations:map[string]string{io.kubernetes.container.hash: f74c808b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3190c649bc3393e7c24d7fbfeb3623374a98bdf40e7ab1304e2f5ff3492784bd,PodSandboxId:a0d7640bdbdb73df7b1ae4454030ea458f37f52fff577075f9e6372f21801856,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441
c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689630158992312836,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6ba8f3-a3c8-4bee-802b-d0808343eb88,},Annotations:map[string]string{io.kubernetes.container.hash: a269ab10,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fde5a7adfc917d3d7ad032d039380389626127f418b2ce30909def05086b9e0,PodSandboxId:102967e9de599d3ac710e1957821c03f7183bac6e36fa433cff288337c24232a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certg
en@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1689630154033686658,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-c7n5x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ab0adbc0-ae28-492a-a2f7-04fc093145bd,},Annotations:map[string]string{io.kubernetes.container.hash: f73f178b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07e367298a9b7d53b90707e176d3d438bb19f643de894b50f1e2dff1d2d577e5,PodSandboxId:abe2724c6fb7a42380073d88285167e4ef271d2c8b9fe50da42183c2fd1cd4b2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8
428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689630129238358704,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sc8ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08451d11-5cf5-43ae-8c43-38a8bb5eb3b5,},Annotations:map[string]string{io.kubernetes.container.hash: 64f5f02f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7335e3ffa97ad45c56516f5b489d5e9896a8319acd3a4c5707189fac89cdf713,PodSandboxId:a0d7640bdbdb73df7b1ae4454030ea458f37f52fff577075f9e6372f21801856,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19
e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689630122944310021,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6ba8f3-a3c8-4bee-802b-d0808343eb88,},Annotations:map[string]string{io.kubernetes.container.hash: a269ab10,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d41845053c903c7826ba5ab5bc709dcb660152eed76a21bd870c2802278bce,PodSandboxId:a6c65c1e4ce2c8740143ae11e1f3c4f3f929dd64bc1386bcf7c0e661b9700c3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43a
de8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689630115184305682,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-t7knm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda2527f-7bae-434e-882b-2168797b2551,},Annotations:map[string]string{io.kubernetes.container.hash: 6fccd973,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c706133d070490c899f58774aedbe7688ea79c779f97ee4b2514b8548a24686e,PodSandboxId:f60ce686bd66c4827a4542614718145d34cd5a7e25d880531384250da6f9a6e9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&I
mageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689630087780130806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-436248,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8921e005c32f1a031e3e117b1d85366,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14e326b6fdeba0864c4242c320498c8cfce391a355a32943aaf36b11a541aa0b,PodSandboxId:7b1a2794a23e064e5a6abc6d5801f11c79124d9b736d90b56a806ec11c00c9ba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSp
ec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689630087592818341,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-436248,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf5034c03a1062d470740c9ba39e4948,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a6d75527361c7649290c54937aef22ea028840b8b01894c24cbd6c9ffe55054,PodSandboxId:77edb0c2c02a98ec55a65073741306332fb11390a133eb400645308ea673e718,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&Image
Spec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689630087380013443,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-436248,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a713d45dbfe5c4b230509adae337504a,},Annotations:map[string]string{io.kubernetes.container.hash: f8df1e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bffccfed78d13ad791b825fcc28eb8151b85e876910b8b0db9f39eaa21fc48c,PodSandboxId:e375f9f3163abc7d97d32ac26f6ca77bcfadc8f449399814bf5df78bb64ac5ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a3
5ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689630087360339339,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-436248,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ef23b6ff0f5060bf5ab9f91ac16489,},Annotations:map[string]string{io.kubernetes.container.hash: 5b2c31a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=856cf1fb-bd27-42c9-8dc0-a49c5ec1ceda name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 21:45:59 addons-436248 crio[715]: time="2023-07-17 21:45:59.985885506Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=73c46c09-5f88-4df7-ae61-46a56005dc48 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 21:45:59 addons-436248 crio[715]: time="2023-07-17 21:45:59.985979451Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=73c46c09-5f88-4df7-ae61-46a56005dc48 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 21:45:59 addons-436248 crio[715]: time="2023-07-17 21:45:59.986270131Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f3601046e4209090bf4af69a020e5614ca9df5fa1687c9fbf19f59157c33c32a,PodSandboxId:728d83f036d0ddcd8ba14e02210b7c30f7715adb4657e47f5841f061e10041ee,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1689630352008033616,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-65bdb79f98-9klcc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3b543087-22f8-43c2-8644-dcd28a40610b,},Annotations:map[string]string{io.kubernetes.container.hash: a73edf38,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dbb5cb12ecc8790439b84da110926e82b8d630125bc2970846fba1aa00c1e66,PodSandboxId:7fbc4c24a76042204f824e106c1026bedc407aa99b07a21713c70d89a8f2b2b7,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1689630212968682819,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c7f365e2-76da-44c0-8ea2-4faa3cb79d1c,},Annotations:map[string]string{io.kubernet
es.container.hash: 89d8aa43,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a938ba7bdc52430ac0c1f5454fd43d76c5b826dcf92955f06a44d997e20b0c2,PodSandboxId:cbe0e409c3329f8bc6e831b79c63ecfcd473f11efea3304daf003d15a0af0ca0,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,State:CONTAINER_RUNNING,CreatedAt:1689630199974898204,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-66f6498c69-97mrg,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 07438fa1-1bae-4ea9-b549-f6cef51a1865,},Annotations:map[string]string{io.kubernetes.container.hash: d706d80b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:671ae5f855249d11c6f69d167556bf40b5aaab8acc14d3652c882ea5f8bfa1a1,PodSandboxId:b24c92118f9cd21a6adcfce1f9a3a7e1c527366f5319d9b41f3aaf889fd6b84d,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1689630191297774826,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-58478865f7-h6brp,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 01d77230-5801-46a8-999d-772e93bbf437,},Annotations:map[string]string{io.kubernetes.container.hash: 9cdbf1c6,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44728389da4be49a851b8b213b8a599d8eeebf65884e286bd4c791b46c1ea462,PodSandboxId:907b1c9a4e1c2194c9ddd25df45755f53cc61de10c84099414c8fb898f41cf41,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdb
f80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1689630169492070962,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-b7z2z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8bf74879-0330-496d-a6b8-bbc7321b85c2,},Annotations:map[string]string{io.kubernetes.container.hash: f74c808b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3190c649bc3393e7c24d7fbfeb3623374a98bdf40e7ab1304e2f5ff3492784bd,PodSandboxId:a0d7640bdbdb73df7b1ae4454030ea458f37f52fff577075f9e6372f21801856,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441
c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689630158992312836,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6ba8f3-a3c8-4bee-802b-d0808343eb88,},Annotations:map[string]string{io.kubernetes.container.hash: a269ab10,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fde5a7adfc917d3d7ad032d039380389626127f418b2ce30909def05086b9e0,PodSandboxId:102967e9de599d3ac710e1957821c03f7183bac6e36fa433cff288337c24232a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certg
en@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1689630154033686658,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-c7n5x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ab0adbc0-ae28-492a-a2f7-04fc093145bd,},Annotations:map[string]string{io.kubernetes.container.hash: f73f178b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07e367298a9b7d53b90707e176d3d438bb19f643de894b50f1e2dff1d2d577e5,PodSandboxId:abe2724c6fb7a42380073d88285167e4ef271d2c8b9fe50da42183c2fd1cd4b2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8
428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689630129238358704,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sc8ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08451d11-5cf5-43ae-8c43-38a8bb5eb3b5,},Annotations:map[string]string{io.kubernetes.container.hash: 64f5f02f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7335e3ffa97ad45c56516f5b489d5e9896a8319acd3a4c5707189fac89cdf713,PodSandboxId:a0d7640bdbdb73df7b1ae4454030ea458f37f52fff577075f9e6372f21801856,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19
e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689630122944310021,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6ba8f3-a3c8-4bee-802b-d0808343eb88,},Annotations:map[string]string{io.kubernetes.container.hash: a269ab10,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d41845053c903c7826ba5ab5bc709dcb660152eed76a21bd870c2802278bce,PodSandboxId:a6c65c1e4ce2c8740143ae11e1f3c4f3f929dd64bc1386bcf7c0e661b9700c3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43a
de8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689630115184305682,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-t7knm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda2527f-7bae-434e-882b-2168797b2551,},Annotations:map[string]string{io.kubernetes.container.hash: 6fccd973,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c706133d070490c899f58774aedbe7688ea79c779f97ee4b2514b8548a24686e,PodSandboxId:f60ce686bd66c4827a4542614718145d34cd5a7e25d880531384250da6f9a6e9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&I
mageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689630087780130806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-436248,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8921e005c32f1a031e3e117b1d85366,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14e326b6fdeba0864c4242c320498c8cfce391a355a32943aaf36b11a541aa0b,PodSandboxId:7b1a2794a23e064e5a6abc6d5801f11c79124d9b736d90b56a806ec11c00c9ba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSp
ec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689630087592818341,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-436248,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf5034c03a1062d470740c9ba39e4948,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a6d75527361c7649290c54937aef22ea028840b8b01894c24cbd6c9ffe55054,PodSandboxId:77edb0c2c02a98ec55a65073741306332fb11390a133eb400645308ea673e718,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&Image
Spec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689630087380013443,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-436248,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a713d45dbfe5c4b230509adae337504a,},Annotations:map[string]string{io.kubernetes.container.hash: f8df1e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bffccfed78d13ad791b825fcc28eb8151b85e876910b8b0db9f39eaa21fc48c,PodSandboxId:e375f9f3163abc7d97d32ac26f6ca77bcfadc8f449399814bf5df78bb64ac5ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a3
5ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689630087360339339,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-436248,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ef23b6ff0f5060bf5ab9f91ac16489,},Annotations:map[string]string{io.kubernetes.container.hash: 5b2c31a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=73c46c09-5f88-4df7-ae61-46a56005dc48 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 21:46:00 addons-436248 crio[715]: time="2023-07-17 21:46:00.014256447Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3e9ba52f-4d61-457e-8dc3-91f885226bd7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 21:46:00 addons-436248 crio[715]: time="2023-07-17 21:46:00.014347464Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3e9ba52f-4d61-457e-8dc3-91f885226bd7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 21:46:00 addons-436248 crio[715]: time="2023-07-17 21:46:00.014750055Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f3601046e4209090bf4af69a020e5614ca9df5fa1687c9fbf19f59157c33c32a,PodSandboxId:728d83f036d0ddcd8ba14e02210b7c30f7715adb4657e47f5841f061e10041ee,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1689630352008033616,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-65bdb79f98-9klcc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3b543087-22f8-43c2-8644-dcd28a40610b,},Annotations:map[string]string{io.kubernetes.container.hash: a73edf38,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dbb5cb12ecc8790439b84da110926e82b8d630125bc2970846fba1aa00c1e66,PodSandboxId:7fbc4c24a76042204f824e106c1026bedc407aa99b07a21713c70d89a8f2b2b7,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1689630212968682819,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c7f365e2-76da-44c0-8ea2-4faa3cb79d1c,},Annotations:map[string]string{io.kubernet
es.container.hash: 89d8aa43,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a938ba7bdc52430ac0c1f5454fd43d76c5b826dcf92955f06a44d997e20b0c2,PodSandboxId:cbe0e409c3329f8bc6e831b79c63ecfcd473f11efea3304daf003d15a0af0ca0,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,State:CONTAINER_RUNNING,CreatedAt:1689630199974898204,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-66f6498c69-97mrg,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 07438fa1-1bae-4ea9-b549-f6cef51a1865,},Annotations:map[string]string{io.kubernetes.container.hash: d706d80b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:671ae5f855249d11c6f69d167556bf40b5aaab8acc14d3652c882ea5f8bfa1a1,PodSandboxId:b24c92118f9cd21a6adcfce1f9a3a7e1c527366f5319d9b41f3aaf889fd6b84d,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1689630191297774826,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-58478865f7-h6brp,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 01d77230-5801-46a8-999d-772e93bbf437,},Annotations:map[string]string{io.kubernetes.container.hash: 9cdbf1c6,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44728389da4be49a851b8b213b8a599d8eeebf65884e286bd4c791b46c1ea462,PodSandboxId:907b1c9a4e1c2194c9ddd25df45755f53cc61de10c84099414c8fb898f41cf41,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdb
f80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1689630169492070962,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-b7z2z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8bf74879-0330-496d-a6b8-bbc7321b85c2,},Annotations:map[string]string{io.kubernetes.container.hash: f74c808b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3190c649bc3393e7c24d7fbfeb3623374a98bdf40e7ab1304e2f5ff3492784bd,PodSandboxId:a0d7640bdbdb73df7b1ae4454030ea458f37f52fff577075f9e6372f21801856,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441
c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689630158992312836,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6ba8f3-a3c8-4bee-802b-d0808343eb88,},Annotations:map[string]string{io.kubernetes.container.hash: a269ab10,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fde5a7adfc917d3d7ad032d039380389626127f418b2ce30909def05086b9e0,PodSandboxId:102967e9de599d3ac710e1957821c03f7183bac6e36fa433cff288337c24232a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certg
en@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1689630154033686658,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-c7n5x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ab0adbc0-ae28-492a-a2f7-04fc093145bd,},Annotations:map[string]string{io.kubernetes.container.hash: f73f178b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07e367298a9b7d53b90707e176d3d438bb19f643de894b50f1e2dff1d2d577e5,PodSandboxId:abe2724c6fb7a42380073d88285167e4ef271d2c8b9fe50da42183c2fd1cd4b2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8
428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689630129238358704,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sc8ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08451d11-5cf5-43ae-8c43-38a8bb5eb3b5,},Annotations:map[string]string{io.kubernetes.container.hash: 64f5f02f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7335e3ffa97ad45c56516f5b489d5e9896a8319acd3a4c5707189fac89cdf713,PodSandboxId:a0d7640bdbdb73df7b1ae4454030ea458f37f52fff577075f9e6372f21801856,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19
e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689630122944310021,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6ba8f3-a3c8-4bee-802b-d0808343eb88,},Annotations:map[string]string{io.kubernetes.container.hash: a269ab10,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d41845053c903c7826ba5ab5bc709dcb660152eed76a21bd870c2802278bce,PodSandboxId:a6c65c1e4ce2c8740143ae11e1f3c4f3f929dd64bc1386bcf7c0e661b9700c3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43a
de8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689630115184305682,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-t7knm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda2527f-7bae-434e-882b-2168797b2551,},Annotations:map[string]string{io.kubernetes.container.hash: 6fccd973,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c706133d070490c899f58774aedbe7688ea79c779f97ee4b2514b8548a24686e,PodSandboxId:f60ce686bd66c4827a4542614718145d34cd5a7e25d880531384250da6f9a6e9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&I
mageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689630087780130806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-436248,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8921e005c32f1a031e3e117b1d85366,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14e326b6fdeba0864c4242c320498c8cfce391a355a32943aaf36b11a541aa0b,PodSandboxId:7b1a2794a23e064e5a6abc6d5801f11c79124d9b736d90b56a806ec11c00c9ba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSp
ec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689630087592818341,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-436248,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf5034c03a1062d470740c9ba39e4948,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a6d75527361c7649290c54937aef22ea028840b8b01894c24cbd6c9ffe55054,PodSandboxId:77edb0c2c02a98ec55a65073741306332fb11390a133eb400645308ea673e718,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&Image
Spec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689630087380013443,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-436248,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a713d45dbfe5c4b230509adae337504a,},Annotations:map[string]string{io.kubernetes.container.hash: f8df1e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bffccfed78d13ad791b825fcc28eb8151b85e876910b8b0db9f39eaa21fc48c,PodSandboxId:e375f9f3163abc7d97d32ac26f6ca77bcfadc8f449399814bf5df78bb64ac5ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a3
5ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689630087360339339,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-436248,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ef23b6ff0f5060bf5ab9f91ac16489,},Annotations:map[string]string{io.kubernetes.container.hash: 5b2c31a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3e9ba52f-4d61-457e-8dc3-91f885226bd7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 21:46:00 addons-436248 crio[715]: time="2023-07-17 21:46:00.051727339Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0ef40001-3cde-4436-8e25-c8c9d0586ce5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 21:46:00 addons-436248 crio[715]: time="2023-07-17 21:46:00.051867098Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0ef40001-3cde-4436-8e25-c8c9d0586ce5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 21:46:00 addons-436248 crio[715]: time="2023-07-17 21:46:00.052151815Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f3601046e4209090bf4af69a020e5614ca9df5fa1687c9fbf19f59157c33c32a,PodSandboxId:728d83f036d0ddcd8ba14e02210b7c30f7715adb4657e47f5841f061e10041ee,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1689630352008033616,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-65bdb79f98-9klcc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3b543087-22f8-43c2-8644-dcd28a40610b,},Annotations:map[string]string{io.kubernetes.container.hash: a73edf38,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dbb5cb12ecc8790439b84da110926e82b8d630125bc2970846fba1aa00c1e66,PodSandboxId:7fbc4c24a76042204f824e106c1026bedc407aa99b07a21713c70d89a8f2b2b7,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1689630212968682819,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c7f365e2-76da-44c0-8ea2-4faa3cb79d1c,},Annotations:map[string]string{io.kubernet
es.container.hash: 89d8aa43,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a938ba7bdc52430ac0c1f5454fd43d76c5b826dcf92955f06a44d997e20b0c2,PodSandboxId:cbe0e409c3329f8bc6e831b79c63ecfcd473f11efea3304daf003d15a0af0ca0,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45,State:CONTAINER_RUNNING,CreatedAt:1689630199974898204,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-66f6498c69-97mrg,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 07438fa1-1bae-4ea9-b549-f6cef51a1865,},Annotations:map[string]string{io.kubernetes.container.hash: d706d80b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:671ae5f855249d11c6f69d167556bf40b5aaab8acc14d3652c882ea5f8bfa1a1,PodSandboxId:b24c92118f9cd21a6adcfce1f9a3a7e1c527366f5319d9b41f3aaf889fd6b84d,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1689630191297774826,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-58478865f7-h6brp,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 01d77230-5801-46a8-999d-772e93bbf437,},Annotations:map[string]string{io.kubernetes.container.hash: 9cdbf1c6,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44728389da4be49a851b8b213b8a599d8eeebf65884e286bd4c791b46c1ea462,PodSandboxId:907b1c9a4e1c2194c9ddd25df45755f53cc61de10c84099414c8fb898f41cf41,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdb
f80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1689630169492070962,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-b7z2z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8bf74879-0330-496d-a6b8-bbc7321b85c2,},Annotations:map[string]string{io.kubernetes.container.hash: f74c808b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3190c649bc3393e7c24d7fbfeb3623374a98bdf40e7ab1304e2f5ff3492784bd,PodSandboxId:a0d7640bdbdb73df7b1ae4454030ea458f37f52fff577075f9e6372f21801856,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441
c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689630158992312836,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6ba8f3-a3c8-4bee-802b-d0808343eb88,},Annotations:map[string]string{io.kubernetes.container.hash: a269ab10,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fde5a7adfc917d3d7ad032d039380389626127f418b2ce30909def05086b9e0,PodSandboxId:102967e9de599d3ac710e1957821c03f7183bac6e36fa433cff288337c24232a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certg
en@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1689630154033686658,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-c7n5x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ab0adbc0-ae28-492a-a2f7-04fc093145bd,},Annotations:map[string]string{io.kubernetes.container.hash: f73f178b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07e367298a9b7d53b90707e176d3d438bb19f643de894b50f1e2dff1d2d577e5,PodSandboxId:abe2724c6fb7a42380073d88285167e4ef271d2c8b9fe50da42183c2fd1cd4b2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8
428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689630129238358704,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sc8ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08451d11-5cf5-43ae-8c43-38a8bb5eb3b5,},Annotations:map[string]string{io.kubernetes.container.hash: 64f5f02f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7335e3ffa97ad45c56516f5b489d5e9896a8319acd3a4c5707189fac89cdf713,PodSandboxId:a0d7640bdbdb73df7b1ae4454030ea458f37f52fff577075f9e6372f21801856,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19
e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689630122944310021,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c6ba8f3-a3c8-4bee-802b-d0808343eb88,},Annotations:map[string]string{io.kubernetes.container.hash: a269ab10,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d41845053c903c7826ba5ab5bc709dcb660152eed76a21bd870c2802278bce,PodSandboxId:a6c65c1e4ce2c8740143ae11e1f3c4f3f929dd64bc1386bcf7c0e661b9700c3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43a
de8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689630115184305682,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-t7knm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eda2527f-7bae-434e-882b-2168797b2551,},Annotations:map[string]string{io.kubernetes.container.hash: 6fccd973,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c706133d070490c899f58774aedbe7688ea79c779f97ee4b2514b8548a24686e,PodSandboxId:f60ce686bd66c4827a4542614718145d34cd5a7e25d880531384250da6f9a6e9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&I
mageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689630087780130806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-436248,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8921e005c32f1a031e3e117b1d85366,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14e326b6fdeba0864c4242c320498c8cfce391a355a32943aaf36b11a541aa0b,PodSandboxId:7b1a2794a23e064e5a6abc6d5801f11c79124d9b736d90b56a806ec11c00c9ba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSp
ec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689630087592818341,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-436248,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf5034c03a1062d470740c9ba39e4948,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a6d75527361c7649290c54937aef22ea028840b8b01894c24cbd6c9ffe55054,PodSandboxId:77edb0c2c02a98ec55a65073741306332fb11390a133eb400645308ea673e718,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&Image
Spec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689630087380013443,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-436248,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a713d45dbfe5c4b230509adae337504a,},Annotations:map[string]string{io.kubernetes.container.hash: f8df1e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bffccfed78d13ad791b825fcc28eb8151b85e876910b8b0db9f39eaa21fc48c,PodSandboxId:e375f9f3163abc7d97d32ac26f6ca77bcfadc8f449399814bf5df78bb64ac5ce,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a3
5ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689630087360339339,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-436248,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33ef23b6ff0f5060bf5ab9f91ac16489,},Annotations:map[string]string{io.kubernetes.container.hash: 5b2c31a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0ef40001-3cde-4436-8e25-c8c9d0586ce5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID
	f3601046e4209       gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea                      8 seconds ago       Running             hello-world-app           0                   728d83f036d0d
	7dbb5cb12ecc8       docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6                              2 minutes ago       Running             nginx                     0                   7fbc4c24a7604
	0a938ba7bdc52       ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45                        2 minutes ago       Running             headlamp                  0                   cbe0e409c3329
	671ae5f855249       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   b24c92118f9cd
	44728389da4be       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d   3 minutes ago       Exited              patch                     0                   907b1c9a4e1c2
	3190c649bc339       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       1                   a0d7640bdbdb7
	5fde5a7adfc91       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d   3 minutes ago       Exited              create                    0                   102967e9de599
	07e367298a9b7       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c                                                             3 minutes ago       Running             kube-proxy                0                   abe2724c6fb7a
	7335e3ffa97ad       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Exited              storage-provisioner       0                   a0d7640bdbdb7
	a9d41845053c9       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   a6c65c1e4ce2c
	c706133d07049       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a                                                             4 minutes ago       Running             kube-scheduler            0                   f60ce686bd66c
	14e326b6fdeba       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f                                                             4 minutes ago       Running             kube-controller-manager   0                   7b1a2794a23e0
	1a6d75527361c       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681                                                             4 minutes ago       Running             etcd                      0                   77edb0c2c02a9
	0bffccfed78d1       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a                                                             4 minutes ago       Running             kube-apiserver            0                   e375f9f3163ab
	
	* 
	* ==> coredns [a9d41845053c903c7826ba5ab5bc709dcb660152eed76a21bd870c2802278bce] <==
	* [INFO] 10.244.0.7:49304 - 54305 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000071185s
	[INFO] 10.244.0.7:33343 - 20658 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000049543s
	[INFO] 10.244.0.7:33343 - 11958 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000033476s
	[INFO] 10.244.0.7:55817 - 11464 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000045772s
	[INFO] 10.244.0.7:55817 - 40375 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000042573s
	[INFO] 10.244.0.7:52523 - 25067 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000277876s
	[INFO] 10.244.0.7:52523 - 49640 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000029064s
	[INFO] 10.244.0.7:57575 - 17095 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000093399s
	[INFO] 10.244.0.7:57575 - 8899 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000104185s
	[INFO] 10.244.0.7:48352 - 46100 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000126084s
	[INFO] 10.244.0.7:48352 - 3476 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000236683s
	[INFO] 10.244.0.7:40931 - 21452 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000046876s
	[INFO] 10.244.0.7:40931 - 24010 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000043159s
	[INFO] 10.244.0.7:37349 - 46362 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000074372s
	[INFO] 10.244.0.7:37349 - 38148 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000040594s
	[INFO] 10.244.0.19:39319 - 61785 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000469056s
	[INFO] 10.244.0.19:33964 - 19531 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000362015s
	[INFO] 10.244.0.19:34627 - 38332 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00024948s
	[INFO] 10.244.0.19:53893 - 25798 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000147707s
	[INFO] 10.244.0.19:58651 - 13729 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000072367s
	[INFO] 10.244.0.19:45888 - 22798 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00014238s
	[INFO] 10.244.0.19:46358 - 49090 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002474821s
	[INFO] 10.244.0.19:55594 - 62706 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 382 0.002829219s
	[INFO] 10.244.0.21:47980 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000711836s
	[INFO] 10.244.0.21:59678 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000371839s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-436248
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-436248
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8
	                    minikube.k8s.io/name=addons-436248
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T21_41_35_0700
	                    minikube.k8s.io/version=v1.31.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-436248
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 21:41:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-436248
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 21:45:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 21:44:08 +0000   Mon, 17 Jul 2023 21:41:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 21:44:08 +0000   Mon, 17 Jul 2023 21:41:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 21:44:08 +0000   Mon, 17 Jul 2023 21:41:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 21:44:08 +0000   Mon, 17 Jul 2023 21:41:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    addons-436248
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 1bef1f9ac22241c5a63f141d28d6ff59
	  System UUID:                1bef1f9a-c222-41c5-a63f-141d28d6ff59
	  Boot ID:                    a79259c4-e6a3-4daa-8650-0b94eda96feb
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-65bdb79f98-9klcc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  gcp-auth                    gcp-auth-58478865f7-h6brp                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  headlamp                    headlamp-66f6498c69-97mrg                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 coredns-5d78c9869d-t7knm                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m13s
	  kube-system                 etcd-addons-436248                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m25s
	  kube-system                 kube-apiserver-addons-436248             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-controller-manager-addons-436248    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-proxy-sc8ph                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-scheduler-addons-436248             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m48s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m34s (x8 over 4m34s)  kubelet          Node addons-436248 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m34s (x8 over 4m34s)  kubelet          Node addons-436248 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m34s (x7 over 4m34s)  kubelet          Node addons-436248 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m25s                  kubelet          Node addons-436248 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m25s                  kubelet          Node addons-436248 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m25s                  kubelet          Node addons-436248 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m25s                  kubelet          Node addons-436248 status is now: NodeReady
	  Normal  RegisteredNode           4m14s                  node-controller  Node addons-436248 event: Registered Node addons-436248 in Controller
	
	* 
	* ==> dmesg <==
	* [  +4.448700] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Jul17 21:41] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.135215] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.065947] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.464695] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.115625] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.157834] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.118953] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.225564] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[  +8.804952] systemd-fstab-generator[909]: Ignoring "noauto" for root device
	[  +9.262572] systemd-fstab-generator[1246]: Ignoring "noauto" for root device
	[ +20.386262] kauditd_printk_skb: 30 callbacks suppressed
	[Jul17 21:42] kauditd_printk_skb: 28 callbacks suppressed
	[ +27.466397] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.111085] kauditd_printk_skb: 16 callbacks suppressed
	[ +26.020994] kauditd_printk_skb: 6 callbacks suppressed
	[Jul17 21:43] kauditd_printk_skb: 1 callbacks suppressed
	[  +8.381235] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.780282] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.526799] kauditd_printk_skb: 20 callbacks suppressed
	[  +9.875787] kauditd_printk_skb: 5 callbacks suppressed
	[Jul17 21:44] kauditd_printk_skb: 12 callbacks suppressed
	
	* 
	* ==> etcd [1a6d75527361c7649290c54937aef22ea028840b8b01894c24cbd6c9ffe55054] <==
	* {"level":"info","ts":"2023-07-17T21:42:45.773Z","caller":"traceutil/trace.go:171","msg":"trace[550825758] transaction","detail":"{read_only:false; response_revision:945; number_of_response:1; }","duration":"104.194729ms","start":"2023-07-17T21:42:45.668Z","end":"2023-07-17T21:42:45.773Z","steps":["trace[550825758] 'process raft request'  (duration: 104.099263ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T21:42:53.433Z","caller":"traceutil/trace.go:171","msg":"trace[848145417] linearizableReadLoop","detail":"{readStateIndex:1025; appliedIndex:1024; }","duration":"121.353731ms","start":"2023-07-17T21:42:53.311Z","end":"2023-07-17T21:42:53.432Z","steps":["trace[848145417] 'read index received'  (duration: 121.211408ms)","trace[848145417] 'applied index is now lower than readState.Index'  (duration: 141.901µs)"],"step_count":2}
	{"level":"info","ts":"2023-07-17T21:42:53.433Z","caller":"traceutil/trace.go:171","msg":"trace[220623433] transaction","detail":"{read_only:false; response_revision:995; number_of_response:1; }","duration":"281.265673ms","start":"2023-07-17T21:42:53.152Z","end":"2023-07-17T21:42:53.433Z","steps":["trace[220623433] 'process raft request'  (duration: 280.746036ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T21:42:53.433Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.167614ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10926"}
	{"level":"info","ts":"2023-07-17T21:42:53.433Z","caller":"traceutil/trace.go:171","msg":"trace[1593428995] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:995; }","duration":"122.335839ms","start":"2023-07-17T21:42:53.311Z","end":"2023-07-17T21:42:53.433Z","steps":["trace[1593428995] 'agreement among raft nodes before linearized reading'  (duration: 122.092164ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T21:43:04.446Z","caller":"traceutil/trace.go:171","msg":"trace[578047515] transaction","detail":"{read_only:false; response_revision:1047; number_of_response:1; }","duration":"453.393935ms","start":"2023-07-17T21:43:03.993Z","end":"2023-07-17T21:43:04.446Z","steps":["trace[578047515] 'process raft request'  (duration: 453.200311ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T21:43:04.447Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T21:43:03.993Z","time spent":"453.717812ms","remote":"127.0.0.1:37870","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1045 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2023-07-17T21:43:04.447Z","caller":"traceutil/trace.go:171","msg":"trace[1400905883] linearizableReadLoop","detail":"{readStateIndex:1080; appliedIndex:1080; }","duration":"136.707388ms","start":"2023-07-17T21:43:04.310Z","end":"2023-07-17T21:43:04.447Z","steps":["trace[1400905883] 'read index received'  (duration: 136.702595ms)","trace[1400905883] 'applied index is now lower than readState.Index'  (duration: 4.075µs)"],"step_count":2}
	{"level":"warn","ts":"2023-07-17T21:43:04.452Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.766475ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10936"}
	{"level":"info","ts":"2023-07-17T21:43:04.452Z","caller":"traceutil/trace.go:171","msg":"trace[384211854] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1047; }","duration":"141.858536ms","start":"2023-07-17T21:43:04.310Z","end":"2023-07-17T21:43:04.452Z","steps":["trace[384211854] 'agreement among raft nodes before linearized reading'  (duration: 136.769791ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T21:43:04.452Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.038769ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-07-17T21:43:04.452Z","caller":"traceutil/trace.go:171","msg":"trace[913649990] range","detail":"{range_begin:/registry/clusterrolebindings/; range_end:/registry/clusterrolebindings0; response_count:0; response_revision:1048; }","duration":"123.104068ms","start":"2023-07-17T21:43:04.329Z","end":"2023-07-17T21:43:04.452Z","steps":["trace[913649990] 'agreement among raft nodes before linearized reading'  (duration: 122.941066ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T21:43:04.453Z","caller":"traceutil/trace.go:171","msg":"trace[1718979252] transaction","detail":"{read_only:false; response_revision:1048; number_of_response:1; }","duration":"103.144788ms","start":"2023-07-17T21:43:04.349Z","end":"2023-07-17T21:43:04.453Z","steps":["trace[1718979252] 'process raft request'  (duration: 99.64977ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T21:43:04.453Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.979113ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13844"}
	{"level":"info","ts":"2023-07-17T21:43:04.453Z","caller":"traceutil/trace.go:171","msg":"trace[1294603712] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1048; }","duration":"102.005888ms","start":"2023-07-17T21:43:04.351Z","end":"2023-07-17T21:43:04.453Z","steps":["trace[1294603712] 'agreement among raft nodes before linearized reading'  (duration: 101.925472ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T21:43:07.740Z","caller":"traceutil/trace.go:171","msg":"trace[521710542] transaction","detail":"{read_only:false; response_revision:1053; number_of_response:1; }","duration":"264.656521ms","start":"2023-07-17T21:43:07.475Z","end":"2023-07-17T21:43:07.740Z","steps":["trace[521710542] 'process raft request'  (duration: 264.440469ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T21:43:19.815Z","caller":"traceutil/trace.go:171","msg":"trace[917883595] transaction","detail":"{read_only:false; response_revision:1155; number_of_response:1; }","duration":"143.400457ms","start":"2023-07-17T21:43:19.672Z","end":"2023-07-17T21:43:19.815Z","steps":["trace[917883595] 'process raft request'  (duration: 143.293168ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T21:43:41.113Z","caller":"traceutil/trace.go:171","msg":"trace[1819114667] linearizableReadLoop","detail":"{readStateIndex:1387; appliedIndex:1386; }","duration":"307.844586ms","start":"2023-07-17T21:43:40.805Z","end":"2023-07-17T21:43:41.113Z","steps":["trace[1819114667] 'read index received'  (duration: 307.699734ms)","trace[1819114667] 'applied index is now lower than readState.Index'  (duration: 144.434µs)"],"step_count":2}
	{"level":"info","ts":"2023-07-17T21:43:41.114Z","caller":"traceutil/trace.go:171","msg":"trace[482281474] transaction","detail":"{read_only:false; response_revision:1338; number_of_response:1; }","duration":"367.408198ms","start":"2023-07-17T21:43:40.746Z","end":"2023-07-17T21:43:41.114Z","steps":["trace[482281474] 'process raft request'  (duration: 367.065222ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T21:43:41.114Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T21:43:40.746Z","time spent":"367.54436ms","remote":"127.0.0.1:37870","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1331 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2023-07-17T21:43:41.114Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"308.585649ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:5638"}
	{"level":"info","ts":"2023-07-17T21:43:41.114Z","caller":"traceutil/trace.go:171","msg":"trace[1656357160] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1338; }","duration":"308.688837ms","start":"2023-07-17T21:43:40.805Z","end":"2023-07-17T21:43:41.114Z","steps":["trace[1656357160] 'agreement among raft nodes before linearized reading'  (duration: 308.449687ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T21:43:41.114Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T21:43:40.805Z","time spent":"308.739633ms","remote":"127.0.0.1:37874","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":2,"response size":5661,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"warn","ts":"2023-07-17T21:43:41.114Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"228.806527ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/\" range_end:\"/registry/leases0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-07-17T21:43:41.114Z","caller":"traceutil/trace.go:171","msg":"trace[1972408448] range","detail":"{range_begin:/registry/leases/; range_end:/registry/leases0; response_count:0; response_revision:1338; }","duration":"228.861721ms","start":"2023-07-17T21:43:40.886Z","end":"2023-07-17T21:43:41.114Z","steps":["trace[1972408448] 'agreement among raft nodes before linearized reading'  (duration: 228.778081ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [671ae5f855249d11c6f69d167556bf40b5aaab8acc14d3652c882ea5f8bfa1a1] <==
	* 2023/07/17 21:43:11 GCP Auth Webhook started!
	2023/07/17 21:43:13 Ready to marshal response ...
	2023/07/17 21:43:13 Ready to write response ...
	2023/07/17 21:43:13 Ready to marshal response ...
	2023/07/17 21:43:13 Ready to write response ...
	2023/07/17 21:43:13 http: TLS handshake error from 10.244.0.1:7300: EOF
	2023/07/17 21:43:13 Ready to marshal response ...
	2023/07/17 21:43:13 Ready to write response ...
	2023/07/17 21:43:22 Ready to marshal response ...
	2023/07/17 21:43:22 Ready to write response ...
	2023/07/17 21:43:23 Ready to marshal response ...
	2023/07/17 21:43:23 Ready to write response ...
	2023/07/17 21:43:29 Ready to marshal response ...
	2023/07/17 21:43:29 Ready to write response ...
	2023/07/17 21:43:38 Ready to marshal response ...
	2023/07/17 21:43:38 Ready to write response ...
	2023/07/17 21:44:11 Ready to marshal response ...
	2023/07/17 21:44:11 Ready to write response ...
	2023/07/17 21:45:49 Ready to marshal response ...
	2023/07/17 21:45:49 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  21:46:00 up 5 min,  0 users,  load average: 0.56, 1.60, 0.85
	Linux addons-436248 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [0bffccfed78d13ad791b825fcc28eb8151b85e876910b8b0db9f39eaa21fc48c] <==
	* I0717 21:43:51.972772       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0717 21:44:26.382897       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 21:44:26.383020       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 21:44:26.401865       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 21:44:26.401966       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 21:44:26.483535       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 21:44:26.483615       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 21:44:26.502870       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 21:44:26.502980       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 21:44:26.553838       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 21:44:26.553912       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 21:44:26.575237       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 21:44:26.575339       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 21:44:26.598255       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 21:44:26.598323       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0717 21:44:27.484753       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0717 21:44:27.599121       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0717 21:44:27.600572       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	E0717 21:44:35.400171       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0717 21:44:35.400262       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 21:44:35.400362       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 21:44:35.400397       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 21:45:49.507180       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs=map[IPv4:10.106.211.214]
	E0717 21:45:52.245717       1 authentication.go:70] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	* 
	* ==> kube-controller-manager [14e326b6fdeba0864c4242c320498c8cfce391a355a32943aaf36b11a541aa0b] <==
	* E0717 21:44:46.322136       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0717 21:44:46.752790       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0717 21:44:46.752940       1 shared_informer.go:318] Caches are synced for resource quota
	I0717 21:44:47.210681       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0717 21:44:47.210819       1 shared_informer.go:318] Caches are synced for garbage collector
	W0717 21:45:05.449427       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 21:45:05.449605       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 21:45:06.163888       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 21:45:06.164087       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 21:45:06.768041       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 21:45:06.768108       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 21:45:15.159419       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 21:45:15.159583       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 21:45:32.430617       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 21:45:32.430735       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 21:45:42.579977       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 21:45:42.580013       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 21:45:47.877123       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 21:45:47.877228       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0717 21:45:49.250187       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-65bdb79f98 to 1"
	I0717 21:45:49.302722       1 event.go:307] "Event occurred" object="default/hello-world-app-65bdb79f98" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-65bdb79f98-9klcc"
	I0717 21:45:52.165147       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-create
	I0717 21:45:52.183600       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	W0717 21:45:54.917319       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 21:45:54.917432       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [07e367298a9b7d53b90707e176d3d438bb19f643de894b50f1e2dff1d2d577e5] <==
	* I0717 21:42:12.225966       1 node.go:141] Successfully retrieved node IP: 192.168.39.220
	I0717 21:42:12.227443       1 server_others.go:110] "Detected node IP" address="192.168.39.220"
	I0717 21:42:12.227570       1 server_others.go:554] "Using iptables proxy"
	I0717 21:42:12.264069       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0717 21:42:12.264115       1 server_others.go:192] "Using iptables Proxier"
	I0717 21:42:12.264147       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 21:42:12.264840       1 server.go:658] "Version info" version="v1.27.3"
	I0717 21:42:12.264850       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 21:42:12.265917       1 config.go:188] "Starting service config controller"
	I0717 21:42:12.265954       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 21:42:12.265971       1 config.go:97] "Starting endpoint slice config controller"
	I0717 21:42:12.265974       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 21:42:12.266372       1 config.go:315] "Starting node config controller"
	I0717 21:42:12.266379       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 21:42:12.367640       1 shared_informer.go:318] Caches are synced for node config
	I0717 21:42:12.367691       1 shared_informer.go:318] Caches are synced for service config
	I0717 21:42:12.367729       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [c706133d070490c899f58774aedbe7688ea79c779f97ee4b2514b8548a24686e] <==
	* W0717 21:41:31.684251       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 21:41:31.684309       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 21:41:31.684606       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 21:41:31.684654       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 21:41:31.684706       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 21:41:31.685035       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 21:41:31.684794       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 21:41:31.685111       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 21:41:31.684834       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 21:41:31.685161       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 21:41:31.684613       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 21:41:31.685206       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 21:41:32.496175       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 21:41:32.496348       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 21:41:32.841288       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 21:41:32.841395       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 21:41:32.897565       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 21:41:32.897614       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 21:41:32.905392       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 21:41:32.905495       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 21:41:32.963311       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 21:41:32.963364       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 21:41:33.111667       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 21:41:33.111750       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 21:41:36.069631       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-07-17 21:41:01 UTC, ends at Mon 2023-07-17 21:46:00 UTC. --
	Jul 17 21:45:49 addons-436248 kubelet[1256]: I0717 21:45:49.469290    1256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3b543087-22f8-43c2-8644-dcd28a40610b-gcp-creds\") pod \"hello-world-app-65bdb79f98-9klcc\" (UID: \"3b543087-22f8-43c2-8644-dcd28a40610b\") " pod="default/hello-world-app-65bdb79f98-9klcc"
	Jul 17 21:45:49 addons-436248 kubelet[1256]: I0717 21:45:49.469342    1256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-769wx\" (UniqueName: \"kubernetes.io/projected/3b543087-22f8-43c2-8644-dcd28a40610b-kube-api-access-769wx\") pod \"hello-world-app-65bdb79f98-9klcc\" (UID: \"3b543087-22f8-43c2-8644-dcd28a40610b\") " pod="default/hello-world-app-65bdb79f98-9klcc"
	Jul 17 21:45:50 addons-436248 kubelet[1256]: I0717 21:45:50.778640    1256 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvdz5\" (UniqueName: \"kubernetes.io/projected/77fff828-5c39-496b-a264-46ce3dbea30b-kube-api-access-tvdz5\") pod \"77fff828-5c39-496b-a264-46ce3dbea30b\" (UID: \"77fff828-5c39-496b-a264-46ce3dbea30b\") "
	Jul 17 21:45:50 addons-436248 kubelet[1256]: I0717 21:45:50.781175    1256 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77fff828-5c39-496b-a264-46ce3dbea30b-kube-api-access-tvdz5" (OuterVolumeSpecName: "kube-api-access-tvdz5") pod "77fff828-5c39-496b-a264-46ce3dbea30b" (UID: "77fff828-5c39-496b-a264-46ce3dbea30b"). InnerVolumeSpecName "kube-api-access-tvdz5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 21:45:50 addons-436248 kubelet[1256]: I0717 21:45:50.879299    1256 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tvdz5\" (UniqueName: \"kubernetes.io/projected/77fff828-5c39-496b-a264-46ce3dbea30b-kube-api-access-tvdz5\") on node \"addons-436248\" DevicePath \"\""
	Jul 17 21:45:51 addons-436248 kubelet[1256]: I0717 21:45:51.203759    1256 scope.go:115] "RemoveContainer" containerID="1d480288dede97794ca17fa1ae999a9e0fe75012fdc7d0ca16065d2e77969a17"
	Jul 17 21:45:51 addons-436248 kubelet[1256]: I0717 21:45:51.284532    1256 scope.go:115] "RemoveContainer" containerID="1d480288dede97794ca17fa1ae999a9e0fe75012fdc7d0ca16065d2e77969a17"
	Jul 17 21:45:51 addons-436248 kubelet[1256]: E0717 21:45:51.286695    1256 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d480288dede97794ca17fa1ae999a9e0fe75012fdc7d0ca16065d2e77969a17\": container with ID starting with 1d480288dede97794ca17fa1ae999a9e0fe75012fdc7d0ca16065d2e77969a17 not found: ID does not exist" containerID="1d480288dede97794ca17fa1ae999a9e0fe75012fdc7d0ca16065d2e77969a17"
	Jul 17 21:45:51 addons-436248 kubelet[1256]: I0717 21:45:51.286737    1256 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:1d480288dede97794ca17fa1ae999a9e0fe75012fdc7d0ca16065d2e77969a17} err="failed to get container status \"1d480288dede97794ca17fa1ae999a9e0fe75012fdc7d0ca16065d2e77969a17\": rpc error: code = NotFound desc = could not find container \"1d480288dede97794ca17fa1ae999a9e0fe75012fdc7d0ca16065d2e77969a17\": container with ID starting with 1d480288dede97794ca17fa1ae999a9e0fe75012fdc7d0ca16065d2e77969a17 not found: ID does not exist"
	Jul 17 21:45:51 addons-436248 kubelet[1256]: I0717 21:45:51.317349    1256 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=77fff828-5c39-496b-a264-46ce3dbea30b path="/var/lib/kubelet/pods/77fff828-5c39-496b-a264-46ce3dbea30b/volumes"
	Jul 17 21:45:52 addons-436248 kubelet[1256]: E0717 21:45:52.214994    1256 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7799c6795f-fw75f.1772c5da70a65af1", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7799c6795f-fw75f", UID:"c51279bc-9133-45c7-9eae-136f9909ee83", APIVersion:"v1", ResourceVersion:"667", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"addons-436248"}, FirstTimestamp:time.Date(2023, time.July, 17, 21, 45, 52, 200850161, time.Local), LastTimestamp:time.Date(2023, time.July, 17, 21, 45, 52, 200850161, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7799c6795f-fw75f.1772c5da70a65af1" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 17 21:45:53 addons-436248 kubelet[1256]: I0717 21:45:53.316203    1256 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=8bf74879-0330-496d-a6b8-bbc7321b85c2 path="/var/lib/kubelet/pods/8bf74879-0330-496d-a6b8-bbc7321b85c2/volumes"
	Jul 17 21:45:53 addons-436248 kubelet[1256]: I0717 21:45:53.317274    1256 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=ab0adbc0-ae28-492a-a2f7-04fc093145bd path="/var/lib/kubelet/pods/ab0adbc0-ae28-492a-a2f7-04fc093145bd/volumes"
	Jul 17 21:45:53 addons-436248 kubelet[1256]: I0717 21:45:53.598727    1256 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c51279bc-9133-45c7-9eae-136f9909ee83-webhook-cert\") pod \"c51279bc-9133-45c7-9eae-136f9909ee83\" (UID: \"c51279bc-9133-45c7-9eae-136f9909ee83\") "
	Jul 17 21:45:53 addons-436248 kubelet[1256]: I0717 21:45:53.598818    1256 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zntx\" (UniqueName: \"kubernetes.io/projected/c51279bc-9133-45c7-9eae-136f9909ee83-kube-api-access-4zntx\") pod \"c51279bc-9133-45c7-9eae-136f9909ee83\" (UID: \"c51279bc-9133-45c7-9eae-136f9909ee83\") "
	Jul 17 21:45:53 addons-436248 kubelet[1256]: I0717 21:45:53.602922    1256 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c51279bc-9133-45c7-9eae-136f9909ee83-kube-api-access-4zntx" (OuterVolumeSpecName: "kube-api-access-4zntx") pod "c51279bc-9133-45c7-9eae-136f9909ee83" (UID: "c51279bc-9133-45c7-9eae-136f9909ee83"). InnerVolumeSpecName "kube-api-access-4zntx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 21:45:53 addons-436248 kubelet[1256]: I0717 21:45:53.603937    1256 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c51279bc-9133-45c7-9eae-136f9909ee83-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "c51279bc-9133-45c7-9eae-136f9909ee83" (UID: "c51279bc-9133-45c7-9eae-136f9909ee83"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 21:45:53 addons-436248 kubelet[1256]: I0717 21:45:53.699311    1256 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c51279bc-9133-45c7-9eae-136f9909ee83-webhook-cert\") on node \"addons-436248\" DevicePath \"\""
	Jul 17 21:45:53 addons-436248 kubelet[1256]: I0717 21:45:53.699381    1256 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4zntx\" (UniqueName: \"kubernetes.io/projected/c51279bc-9133-45c7-9eae-136f9909ee83-kube-api-access-4zntx\") on node \"addons-436248\" DevicePath \"\""
	Jul 17 21:45:54 addons-436248 kubelet[1256]: I0717 21:45:54.226712    1256 scope.go:115] "RemoveContainer" containerID="920b747e05ebaffc5796ed7d6d8216c801de7462b1d93bd1272f20e346465464"
	Jul 17 21:45:54 addons-436248 kubelet[1256]: I0717 21:45:54.266095    1256 scope.go:115] "RemoveContainer" containerID="920b747e05ebaffc5796ed7d6d8216c801de7462b1d93bd1272f20e346465464"
	Jul 17 21:45:54 addons-436248 kubelet[1256]: E0717 21:45:54.266792    1256 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"920b747e05ebaffc5796ed7d6d8216c801de7462b1d93bd1272f20e346465464\": container with ID starting with 920b747e05ebaffc5796ed7d6d8216c801de7462b1d93bd1272f20e346465464 not found: ID does not exist" containerID="920b747e05ebaffc5796ed7d6d8216c801de7462b1d93bd1272f20e346465464"
	Jul 17 21:45:54 addons-436248 kubelet[1256]: I0717 21:45:54.266829    1256 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:920b747e05ebaffc5796ed7d6d8216c801de7462b1d93bd1272f20e346465464} err="failed to get container status \"920b747e05ebaffc5796ed7d6d8216c801de7462b1d93bd1272f20e346465464\": rpc error: code = NotFound desc = could not find container \"920b747e05ebaffc5796ed7d6d8216c801de7462b1d93bd1272f20e346465464\": container with ID starting with 920b747e05ebaffc5796ed7d6d8216c801de7462b1d93bd1272f20e346465464 not found: ID does not exist"
	Jul 17 21:45:55 addons-436248 kubelet[1256]: I0717 21:45:55.315948    1256 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=c51279bc-9133-45c7-9eae-136f9909ee83 path="/var/lib/kubelet/pods/c51279bc-9133-45c7-9eae-136f9909ee83/volumes"
	Jul 17 21:45:59 addons-436248 kubelet[1256]: I0717 21:45:59.314737    1256 kubelet_pods.go:894] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-5d78c9869d-t7knm" secret="" err="secret \"gcp-auth\" not found"
	
	* 
	* ==> storage-provisioner [3190c649bc3393e7c24d7fbfeb3623374a98bdf40e7ab1304e2f5ff3492784bd] <==
	* I0717 21:42:39.265076       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 21:42:39.287578       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 21:42:39.287647       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 21:42:39.313553       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 21:42:39.313717       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-436248_e23fb593-e16d-42ff-8be7-01d973d70b3a!
	I0717 21:42:39.321774       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"71e4269e-98d9-466b-b120-7c1197b0f1c7", APIVersion:"v1", ResourceVersion:"913", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-436248_e23fb593-e16d-42ff-8be7-01d973d70b3a became leader
	I0717 21:42:39.413915       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-436248_e23fb593-e16d-42ff-8be7-01d973d70b3a!
	
	* 
	* ==> storage-provisioner [7335e3ffa97ad45c56516f5b489d5e9896a8319acd3a4c5707189fac89cdf713] <==
	* I0717 21:42:06.076177       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0717 21:42:36.093939       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-436248 -n addons-436248
helpers_test.go:261: (dbg) Run:  kubectl --context addons-436248 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (152.84s)

                                                
                                    
TestAddons/StoppedEnableDisable (154.77s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-436248
addons_test.go:148: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-436248: exit status 82 (2m0.902054187s)

                                                
                                                
-- stdout --
	* Stopping node "addons-436248"  ...
	* Stopping node "addons-436248"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:150: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-436248" : exit status 82
addons_test.go:152: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-436248
addons_test.go:152: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-436248: exit status 11 (21.578590063s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.220:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:154: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-436248" : exit status 11
addons_test.go:156: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-436248
addons_test.go:156: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-436248: exit status 11 (6.144029374s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.220:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:158: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-436248" : exit status 11
addons_test.go:161: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-436248
addons_test.go:161: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-436248: exit status 11 (6.143424321s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.220:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:163: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-436248" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.77s)

                                                
                                    
TestFunctional/parallel/License (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2284: (dbg) Non-zero exit: out/minikube-linux-amd64 license: exit status 40 (99.934849ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to INET_LICENSES: Failed to download licenses: download request did not return a 200, received: 404
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_license_42713f820c0ac68901ecf7b12bfdf24c2cafe65d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2285: command "\n\n" failed: exit status 40
--- FAIL: TestFunctional/parallel/License (0.10s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (170.51s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-480151 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-480151 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (9.139277899s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-480151 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-480151 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [1573b30b-a311-4799-8a91-b4d776ee3681] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [1573b30b-a311-4799-8a91-b4d776ee3681] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.025437083s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-480151 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0717 21:55:55.737669   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
E0717 21:57:28.102030   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/functional-767593/client.crt: no such file or directory
E0717 21:57:28.107364   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/functional-767593/client.crt: no such file or directory
E0717 21:57:28.117605   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/functional-767593/client.crt: no such file or directory
E0717 21:57:28.137898   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/functional-767593/client.crt: no such file or directory
E0717 21:57:28.178223   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/functional-767593/client.crt: no such file or directory
E0717 21:57:28.258564   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/functional-767593/client.crt: no such file or directory
E0717 21:57:28.419039   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/functional-767593/client.crt: no such file or directory
E0717 21:57:28.739761   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/functional-767593/client.crt: no such file or directory
E0717 21:57:29.380694   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/functional-767593/client.crt: no such file or directory
E0717 21:57:30.661184   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/functional-767593/client.crt: no such file or directory
E0717 21:57:33.222069   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/functional-767593/client.crt: no such file or directory
E0717 21:57:38.342719   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/functional-767593/client.crt: no such file or directory
E0717 21:57:48.583022   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/functional-767593/client.crt: no such file or directory
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-480151 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.440868021s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-480151 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-480151 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.29
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-480151 addons disable ingress-dns --alsologtostderr -v=1
E0717 21:58:09.064022   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/functional-767593/client.crt: no such file or directory
E0717 21:58:11.892311   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-480151 addons disable ingress-dns --alsologtostderr -v=1: (12.616950004s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-480151 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-480151 addons disable ingress --alsologtostderr -v=1: (7.558115383s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-480151 -n ingress-addon-legacy-480151
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-480151 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-480151 logs -n 25: (1.043740166s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                 Args                 |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-767593 ssh findmnt        | functional-767593           | jenkins | v1.31.0 | 17 Jul 23 21:53 UTC | 17 Jul 23 21:53 UTC |
	|                | -T /mount3                           |                             |         |         |                     |                     |
	| mount          | -p functional-767593                 | functional-767593           | jenkins | v1.31.0 | 17 Jul 23 21:53 UTC |                     |
	|                | --kill=true                          |                             |         |         |                     |                     |
	| ssh            | functional-767593 ssh echo           | functional-767593           | jenkins | v1.31.0 | 17 Jul 23 21:53 UTC | 17 Jul 23 21:53 UTC |
	|                | hello                                |                             |         |         |                     |                     |
	| ssh            | functional-767593 ssh cat            | functional-767593           | jenkins | v1.31.0 | 17 Jul 23 21:53 UTC | 17 Jul 23 21:53 UTC |
	|                | /etc/hostname                        |                             |         |         |                     |                     |
	| addons         | functional-767593 addons list        | functional-767593           | jenkins | v1.31.0 | 17 Jul 23 21:53 UTC | 17 Jul 23 21:53 UTC |
	| addons         | functional-767593 addons list        | functional-767593           | jenkins | v1.31.0 | 17 Jul 23 21:53 UTC | 17 Jul 23 21:53 UTC |
	|                | -o json                              |                             |         |         |                     |                     |
	| update-context | functional-767593                    | functional-767593           | jenkins | v1.31.0 | 17 Jul 23 21:53 UTC | 17 Jul 23 21:53 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-767593                    | functional-767593           | jenkins | v1.31.0 | 17 Jul 23 21:53 UTC | 17 Jul 23 21:53 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-767593                    | functional-767593           | jenkins | v1.31.0 | 17 Jul 23 21:53 UTC | 17 Jul 23 21:53 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| image          | functional-767593                    | functional-767593           | jenkins | v1.31.0 | 17 Jul 23 21:53 UTC | 17 Jul 23 21:53 UTC |
	|                | image ls --format short              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-767593                    | functional-767593           | jenkins | v1.31.0 | 17 Jul 23 21:53 UTC | 17 Jul 23 21:53 UTC |
	|                | image ls --format yaml               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| ssh            | functional-767593 ssh pgrep          | functional-767593           | jenkins | v1.31.0 | 17 Jul 23 21:53 UTC |                     |
	|                | buildkitd                            |                             |         |         |                     |                     |
	| image          | functional-767593 image build -t     | functional-767593           | jenkins | v1.31.0 | 17 Jul 23 21:53 UTC | 17 Jul 23 21:53 UTC |
	|                | localhost/my-image:functional-767593 |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr     |                             |         |         |                     |                     |
	| image          | functional-767593 image ls           | functional-767593           | jenkins | v1.31.0 | 17 Jul 23 21:53 UTC | 17 Jul 23 21:53 UTC |
	| image          | functional-767593                    | functional-767593           | jenkins | v1.31.0 | 17 Jul 23 21:53 UTC | 17 Jul 23 21:53 UTC |
	|                | image ls --format json               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-767593                    | functional-767593           | jenkins | v1.31.0 | 17 Jul 23 21:53 UTC | 17 Jul 23 21:53 UTC |
	|                | image ls --format table              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| service        | functional-767593 service            | functional-767593           | jenkins | v1.31.0 | 17 Jul 23 21:53 UTC | 17 Jul 23 21:53 UTC |
	|                | hello-node-connect --url             |                             |         |         |                     |                     |
	| delete         | -p functional-767593                 | functional-767593           | jenkins | v1.31.0 | 17 Jul 23 21:53 UTC | 17 Jul 23 21:53 UTC |
	| start          | -p ingress-addon-legacy-480151       | ingress-addon-legacy-480151 | jenkins | v1.31.0 | 17 Jul 23 21:53 UTC | 17 Jul 23 21:55 UTC |
	|                | --kubernetes-version=v1.18.20        |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true            |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	|                | -v=5 --driver=kvm2                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-480151          | ingress-addon-legacy-480151 | jenkins | v1.31.0 | 17 Jul 23 21:55 UTC | 17 Jul 23 21:55 UTC |
	|                | addons enable ingress                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-480151          | ingress-addon-legacy-480151 | jenkins | v1.31.0 | 17 Jul 23 21:55 UTC | 17 Jul 23 21:55 UTC |
	|                | addons enable ingress-dns            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-480151          | ingress-addon-legacy-480151 | jenkins | v1.31.0 | 17 Jul 23 21:55 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/        |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'         |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-480151 ip       | ingress-addon-legacy-480151 | jenkins | v1.31.0 | 17 Jul 23 21:58 UTC | 17 Jul 23 21:58 UTC |
	| addons         | ingress-addon-legacy-480151          | ingress-addon-legacy-480151 | jenkins | v1.31.0 | 17 Jul 23 21:58 UTC | 17 Jul 23 21:58 UTC |
	|                | addons disable ingress-dns           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-480151          | ingress-addon-legacy-480151 | jenkins | v1.31.0 | 17 Jul 23 21:58 UTC | 17 Jul 23 21:58 UTC |
	|                | addons disable ingress               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
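The `ssh curl` entry in the audit trail above is the ingress probe that never records an end time. A by-hand reproduction of that check against the same profile would look roughly like the lines below (the `-i` flag is an illustrative addition to surface the HTTP status line, not the harness's exact invocation):

    minikube -p ingress-addon-legacy-480151 ssh \
      "curl -s -i http://127.0.0.1/ -H 'Host: nginx.example.com'"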
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 21:53:29
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 21:53:29.409913   30805 out.go:296] Setting OutFile to fd 1 ...
	I0717 21:53:29.410040   30805 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:53:29.410049   30805 out.go:309] Setting ErrFile to fd 2...
	I0717 21:53:29.410054   30805 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:53:29.410249   30805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-15759/.minikube/bin
	I0717 21:53:29.410902   30805 out.go:303] Setting JSON to false
	I0717 21:53:29.411782   30805 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5761,"bootTime":1689625048,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 21:53:29.411846   30805 start.go:138] virtualization: kvm guest
	I0717 21:53:29.414044   30805 out.go:177] * [ingress-addon-legacy-480151] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 21:53:29.415794   30805 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 21:53:29.415777   30805 notify.go:220] Checking for updates...
	I0717 21:53:29.417070   30805 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 21:53:29.418594   30805 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 21:53:29.420142   30805 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-15759/.minikube
	I0717 21:53:29.421626   30805 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 21:53:29.422846   30805 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 21:53:29.424300   30805 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 21:53:29.465150   30805 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 21:53:29.466447   30805 start.go:298] selected driver: kvm2
	I0717 21:53:29.466459   30805 start.go:880] validating driver "kvm2" against <nil>
	I0717 21:53:29.466473   30805 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 21:53:29.467370   30805 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:53:29.467457   30805 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16899-15759/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 21:53:29.482019   30805 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.0
	I0717 21:53:29.482061   30805 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 21:53:29.482228   30805 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 21:53:29.482254   30805 cni.go:84] Creating CNI manager for ""
	I0717 21:53:29.482262   30805 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 21:53:29.482269   30805 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 21:53:29.482277   30805 start_flags.go:319] config:
	{Name:ingress-addon-legacy-480151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-480151 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:53:29.482384   30805 iso.go:125] acquiring lock: {Name:mk0fc3164b0f4222cb2c3b274841202f1f6c0fc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:53:29.484224   30805 out.go:177] * Starting control plane node ingress-addon-legacy-480151 in cluster ingress-addon-legacy-480151
	I0717 21:53:29.485677   30805 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0717 21:53:29.508509   30805 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0717 21:53:29.508539   30805 cache.go:57] Caching tarball of preloaded images
	I0717 21:53:29.508723   30805 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0717 21:53:29.510555   30805 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0717 21:53:29.511903   30805 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0717 21:53:29.537033   30805 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0717 21:53:32.666341   30805 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0717 21:53:32.666453   30805 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0717 21:53:33.661452   30805 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
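The preload tarball is fetched with an md5 checksum passed in the URL query string and verified locally before it is trusted. A rough by-hand equivalent of that verification, using the URL and checksum from the download line above, would be:

    curl -LO "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4"
    echo "0d02e096853189c5b37812b400898e14  preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4" | md5sum -c -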
	I0717 21:53:33.661773   30805 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/config.json ...
	I0717 21:53:33.661801   30805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/config.json: {Name:mka63df9ea8f5dc7fcfe5e4d70cff4718f397449 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:53:33.661954   30805 start.go:365] acquiring machines lock for ingress-addon-legacy-480151: {Name:mk8a02d78ba6485e1a7d39c2e7feff944c6a9dbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 21:53:33.661985   30805 start.go:369] acquired machines lock for "ingress-addon-legacy-480151" in 17.063µs
	I0717 21:53:33.661999   30805 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-480151 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-480151 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 21:53:33.662081   30805 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 21:53:33.664267   30805 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0717 21:53:33.664436   30805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:53:33.664479   30805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:53:33.678622   30805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46409
	I0717 21:53:33.679105   30805 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:53:33.679775   30805 main.go:141] libmachine: Using API Version  1
	I0717 21:53:33.679802   30805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:53:33.680148   30805 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:53:33.680323   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetMachineName
	I0717 21:53:33.680463   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .DriverName
	I0717 21:53:33.680609   30805 start.go:159] libmachine.API.Create for "ingress-addon-legacy-480151" (driver="kvm2")
	I0717 21:53:33.680663   30805 client.go:168] LocalClient.Create starting
	I0717 21:53:33.680691   30805 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem
	I0717 21:53:33.680723   30805 main.go:141] libmachine: Decoding PEM data...
	I0717 21:53:33.680742   30805 main.go:141] libmachine: Parsing certificate...
	I0717 21:53:33.680800   30805 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem
	I0717 21:53:33.680819   30805 main.go:141] libmachine: Decoding PEM data...
	I0717 21:53:33.680830   30805 main.go:141] libmachine: Parsing certificate...
	I0717 21:53:33.680850   30805 main.go:141] libmachine: Running pre-create checks...
	I0717 21:53:33.680863   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .PreCreateCheck
	I0717 21:53:33.681181   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetConfigRaw
	I0717 21:53:33.681618   30805 main.go:141] libmachine: Creating machine...
	I0717 21:53:33.681633   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .Create
	I0717 21:53:33.681759   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Creating KVM machine...
	I0717 21:53:33.683089   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | found existing default KVM network
	I0717 21:53:33.683765   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | I0717 21:53:33.683641   30839 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f110}
	I0717 21:53:33.689098   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | trying to create private KVM network mk-ingress-addon-legacy-480151 192.168.39.0/24...
	I0717 21:53:33.758816   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Setting up store path in /home/jenkins/minikube-integration/16899-15759/.minikube/machines/ingress-addon-legacy-480151 ...
	I0717 21:53:33.758850   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | private KVM network mk-ingress-addon-legacy-480151 192.168.39.0/24 created
	I0717 21:53:33.758864   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Building disk image from file:///home/jenkins/minikube-integration/16899-15759/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso
	I0717 21:53:33.758877   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | I0717 21:53:33.758712   30839 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/16899-15759/.minikube
	I0717 21:53:33.758920   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Downloading /home/jenkins/minikube-integration/16899-15759/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16899-15759/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso...
	I0717 21:53:33.957795   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | I0717 21:53:33.957668   30839 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/ingress-addon-legacy-480151/id_rsa...
	I0717 21:53:34.099587   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | I0717 21:53:34.099456   30839 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/ingress-addon-legacy-480151/ingress-addon-legacy-480151.rawdisk...
	I0717 21:53:34.099628   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | Writing magic tar header
	I0717 21:53:34.099642   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | Writing SSH key tar header
	I0717 21:53:34.099652   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | I0717 21:53:34.099589   30839 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/16899-15759/.minikube/machines/ingress-addon-legacy-480151 ...
	I0717 21:53:34.099726   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/ingress-addon-legacy-480151
	I0717 21:53:34.099752   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Setting executable bit set on /home/jenkins/minikube-integration/16899-15759/.minikube/machines/ingress-addon-legacy-480151 (perms=drwx------)
	I0717 21:53:34.099764   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16899-15759/.minikube/machines
	I0717 21:53:34.099778   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Setting executable bit set on /home/jenkins/minikube-integration/16899-15759/.minikube/machines (perms=drwxr-xr-x)
	I0717 21:53:34.099795   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Setting executable bit set on /home/jenkins/minikube-integration/16899-15759/.minikube (perms=drwxr-xr-x)
	I0717 21:53:34.099806   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Setting executable bit set on /home/jenkins/minikube-integration/16899-15759 (perms=drwxrwxr-x)
	I0717 21:53:34.099819   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16899-15759/.minikube
	I0717 21:53:34.099831   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 21:53:34.099841   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 21:53:34.099851   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Creating domain...
	I0717 21:53:34.099858   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16899-15759
	I0717 21:53:34.099866   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 21:53:34.099876   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | Checking permissions on dir: /home/jenkins
	I0717 21:53:34.099892   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | Checking permissions on dir: /home
	I0717 21:53:34.099906   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | Skipping /home - not owner
	I0717 21:53:34.101042   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) define libvirt domain using xml: 
	I0717 21:53:34.101081   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) <domain type='kvm'>
	I0717 21:53:34.101095   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)   <name>ingress-addon-legacy-480151</name>
	I0717 21:53:34.101111   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)   <memory unit='MiB'>4096</memory>
	I0717 21:53:34.101158   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)   <vcpu>2</vcpu>
	I0717 21:53:34.101187   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)   <features>
	I0717 21:53:34.101198   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)     <acpi/>
	I0717 21:53:34.101203   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)     <apic/>
	I0717 21:53:34.101212   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)     <pae/>
	I0717 21:53:34.101217   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)     
	I0717 21:53:34.101223   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)   </features>
	I0717 21:53:34.101231   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)   <cpu mode='host-passthrough'>
	I0717 21:53:34.101238   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)   
	I0717 21:53:34.101245   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)   </cpu>
	I0717 21:53:34.101252   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)   <os>
	I0717 21:53:34.101266   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)     <type>hvm</type>
	I0717 21:53:34.101276   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)     <boot dev='cdrom'/>
	I0717 21:53:34.101285   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)     <boot dev='hd'/>
	I0717 21:53:34.101292   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)     <bootmenu enable='no'/>
	I0717 21:53:34.101301   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)   </os>
	I0717 21:53:34.101307   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)   <devices>
	I0717 21:53:34.101313   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)     <disk type='file' device='cdrom'>
	I0717 21:53:34.101323   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)       <source file='/home/jenkins/minikube-integration/16899-15759/.minikube/machines/ingress-addon-legacy-480151/boot2docker.iso'/>
	I0717 21:53:34.101336   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)       <target dev='hdc' bus='scsi'/>
	I0717 21:53:34.101345   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)       <readonly/>
	I0717 21:53:34.101351   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)     </disk>
	I0717 21:53:34.101358   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)     <disk type='file' device='disk'>
	I0717 21:53:34.101365   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 21:53:34.101377   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)       <source file='/home/jenkins/minikube-integration/16899-15759/.minikube/machines/ingress-addon-legacy-480151/ingress-addon-legacy-480151.rawdisk'/>
	I0717 21:53:34.101386   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)       <target dev='hda' bus='virtio'/>
	I0717 21:53:34.101392   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)     </disk>
	I0717 21:53:34.101398   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)     <interface type='network'>
	I0717 21:53:34.101431   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)       <source network='mk-ingress-addon-legacy-480151'/>
	I0717 21:53:34.101454   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)       <model type='virtio'/>
	I0717 21:53:34.101471   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)     </interface>
	I0717 21:53:34.101481   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)     <interface type='network'>
	I0717 21:53:34.101491   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)       <source network='default'/>
	I0717 21:53:34.101504   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)       <model type='virtio'/>
	I0717 21:53:34.101535   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)     </interface>
	I0717 21:53:34.101553   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)     <serial type='pty'>
	I0717 21:53:34.101573   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)       <target port='0'/>
	I0717 21:53:34.101585   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)     </serial>
	I0717 21:53:34.101599   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)     <console type='pty'>
	I0717 21:53:34.101613   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)       <target type='serial' port='0'/>
	I0717 21:53:34.101645   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)     </console>
	I0717 21:53:34.101660   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)     <rng model='virtio'>
	I0717 21:53:34.101673   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)       <backend model='random'>/dev/random</backend>
	I0717 21:53:34.101694   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)     </rng>
	I0717 21:53:34.101718   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)     
	I0717 21:53:34.101730   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)     
	I0717 21:53:34.101750   30805 main.go:141] libmachine: (ingress-addon-legacy-480151)   </devices>
	I0717 21:53:34.101765   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) </domain>
	I0717 21:53:34.101778   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) 
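The kvm2 driver defines and boots the VM from domain XML like the dump above through the libvirt API. Roughly the same operations can be performed by hand with virsh once such an XML file exists (the file name here is illustrative):

    virsh define ingress-addon-legacy-480151.xml    # register the domain from its XML
    virsh start ingress-addon-legacy-480151         # boot the VM
    virsh domifaddr ingress-addon-legacy-480151     # list DHCP leases, i.e. the IP the driver polls for below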
	I0717 21:53:34.107188   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined MAC address 52:54:00:61:58:3d in network default
	I0717 21:53:34.107758   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Ensuring networks are active...
	I0717 21:53:34.107782   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:53:34.108524   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Ensuring network default is active
	I0717 21:53:34.108811   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Ensuring network mk-ingress-addon-legacy-480151 is active
	I0717 21:53:34.109362   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Getting domain xml...
	I0717 21:53:34.110202   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Creating domain...
	I0717 21:53:34.459064   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Waiting to get IP...
	I0717 21:53:34.459744   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:53:34.460217   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | unable to find current IP address of domain ingress-addon-legacy-480151 in network mk-ingress-addon-legacy-480151
	I0717 21:53:34.460264   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | I0717 21:53:34.460209   30839 retry.go:31] will retry after 222.579711ms: waiting for machine to come up
	I0717 21:53:34.684631   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:53:34.685100   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | unable to find current IP address of domain ingress-addon-legacy-480151 in network mk-ingress-addon-legacy-480151
	I0717 21:53:34.685129   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | I0717 21:53:34.685046   30839 retry.go:31] will retry after 330.946065ms: waiting for machine to come up
	I0717 21:53:35.017603   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:53:35.018023   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | unable to find current IP address of domain ingress-addon-legacy-480151 in network mk-ingress-addon-legacy-480151
	I0717 21:53:35.018054   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | I0717 21:53:35.017987   30839 retry.go:31] will retry after 474.970792ms: waiting for machine to come up
	I0717 21:53:35.494710   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:53:35.495097   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | unable to find current IP address of domain ingress-addon-legacy-480151 in network mk-ingress-addon-legacy-480151
	I0717 21:53:35.495115   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | I0717 21:53:35.495062   30839 retry.go:31] will retry after 456.4547ms: waiting for machine to come up
	I0717 21:53:35.952707   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:53:35.953089   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | unable to find current IP address of domain ingress-addon-legacy-480151 in network mk-ingress-addon-legacy-480151
	I0717 21:53:35.953120   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | I0717 21:53:35.953046   30839 retry.go:31] will retry after 546.970965ms: waiting for machine to come up
	I0717 21:53:36.501833   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:53:36.502330   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | unable to find current IP address of domain ingress-addon-legacy-480151 in network mk-ingress-addon-legacy-480151
	I0717 21:53:36.502352   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | I0717 21:53:36.502292   30839 retry.go:31] will retry after 913.687262ms: waiting for machine to come up
	I0717 21:53:37.418009   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:53:37.418449   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | unable to find current IP address of domain ingress-addon-legacy-480151 in network mk-ingress-addon-legacy-480151
	I0717 21:53:37.418478   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | I0717 21:53:37.418411   30839 retry.go:31] will retry after 1.177153628s: waiting for machine to come up
	I0717 21:53:38.597869   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:53:38.598264   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | unable to find current IP address of domain ingress-addon-legacy-480151 in network mk-ingress-addon-legacy-480151
	I0717 21:53:38.598292   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | I0717 21:53:38.598205   30839 retry.go:31] will retry after 1.307934051s: waiting for machine to come up
	I0717 21:53:39.908188   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:53:39.908575   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | unable to find current IP address of domain ingress-addon-legacy-480151 in network mk-ingress-addon-legacy-480151
	I0717 21:53:39.908612   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | I0717 21:53:39.908534   30839 retry.go:31] will retry after 1.217302235s: waiting for machine to come up
	I0717 21:53:41.128151   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:53:41.128551   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | unable to find current IP address of domain ingress-addon-legacy-480151 in network mk-ingress-addon-legacy-480151
	I0717 21:53:41.128572   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | I0717 21:53:41.128510   30839 retry.go:31] will retry after 1.780221436s: waiting for machine to come up
	I0717 21:53:42.911356   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:53:42.911779   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | unable to find current IP address of domain ingress-addon-legacy-480151 in network mk-ingress-addon-legacy-480151
	I0717 21:53:42.911828   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | I0717 21:53:42.911747   30839 retry.go:31] will retry after 2.555587886s: waiting for machine to come up
	I0717 21:53:45.468716   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:53:45.469131   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | unable to find current IP address of domain ingress-addon-legacy-480151 in network mk-ingress-addon-legacy-480151
	I0717 21:53:45.469159   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | I0717 21:53:45.469083   30839 retry.go:31] will retry after 3.596487636s: waiting for machine to come up
	I0717 21:53:49.067705   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:53:49.068157   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | unable to find current IP address of domain ingress-addon-legacy-480151 in network mk-ingress-addon-legacy-480151
	I0717 21:53:49.068186   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | I0717 21:53:49.068117   30839 retry.go:31] will retry after 4.450351028s: waiting for machine to come up
	I0717 21:53:53.523616   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:53:53.523974   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | unable to find current IP address of domain ingress-addon-legacy-480151 in network mk-ingress-addon-legacy-480151
	I0717 21:53:53.524004   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | I0717 21:53:53.523932   30839 retry.go:31] will retry after 5.364334338s: waiting for machine to come up
	I0717 21:53:58.890261   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:53:58.890699   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Found IP for machine: 192.168.39.29
	I0717 21:53:58.890727   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has current primary IP address 192.168.39.29 and MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:53:58.890738   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Reserving static IP address...
	I0717 21:53:58.891168   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-480151", mac: "52:54:00:89:f2:d9", ip: "192.168.39.29"} in network mk-ingress-addon-legacy-480151
	I0717 21:53:58.962353   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | Getting to WaitForSSH function...
	I0717 21:53:58.962389   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Reserved static IP address: 192.168.39.29
	I0717 21:53:58.962405   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Waiting for SSH to be available...
	I0717 21:53:58.965067   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:53:58.965540   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:f2:d9", ip: ""} in network mk-ingress-addon-legacy-480151: {Iface:virbr1 ExpiryTime:2023-07-17 22:53:48 +0000 UTC Type:0 Mac:52:54:00:89:f2:d9 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:minikube Clientid:01:52:54:00:89:f2:d9}
	I0717 21:53:58.965582   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined IP address 192.168.39.29 and MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:53:58.965722   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | Using SSH client type: external
	I0717 21:53:58.965738   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | Using SSH private key: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/ingress-addon-legacy-480151/id_rsa (-rw-------)
	I0717 21:53:58.965763   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.29 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16899-15759/.minikube/machines/ingress-addon-legacy-480151/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 21:53:58.965778   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | About to run SSH command:
	I0717 21:53:58.965797   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | exit 0
	I0717 21:53:59.061234   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | SSH cmd err, output: <nil>: 
	I0717 21:53:59.061490   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) KVM machine creation complete!
	I0717 21:53:59.061845   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetConfigRaw
	I0717 21:53:59.062386   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .DriverName
	I0717 21:53:59.062602   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .DriverName
	I0717 21:53:59.062751   30805 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 21:53:59.062766   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetState
	I0717 21:53:59.064111   30805 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 21:53:59.064124   30805 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 21:53:59.064130   30805 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 21:53:59.064137   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHHostname
	I0717 21:53:59.066102   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:53:59.066422   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:f2:d9", ip: ""} in network mk-ingress-addon-legacy-480151: {Iface:virbr1 ExpiryTime:2023-07-17 22:53:48 +0000 UTC Type:0 Mac:52:54:00:89:f2:d9 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ingress-addon-legacy-480151 Clientid:01:52:54:00:89:f2:d9}
	I0717 21:53:59.066453   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined IP address 192.168.39.29 and MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:53:59.066516   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHPort
	I0717 21:53:59.066679   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHKeyPath
	I0717 21:53:59.066831   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHKeyPath
	I0717 21:53:59.066937   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHUsername
	I0717 21:53:59.067100   30805 main.go:141] libmachine: Using SSH client type: native
	I0717 21:53:59.067515   30805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0717 21:53:59.067528   30805 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 21:53:59.192788   30805 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 21:53:59.192812   30805 main.go:141] libmachine: Detecting the provisioner...
	I0717 21:53:59.192819   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHHostname
	I0717 21:53:59.195405   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:53:59.195754   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:f2:d9", ip: ""} in network mk-ingress-addon-legacy-480151: {Iface:virbr1 ExpiryTime:2023-07-17 22:53:48 +0000 UTC Type:0 Mac:52:54:00:89:f2:d9 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ingress-addon-legacy-480151 Clientid:01:52:54:00:89:f2:d9}
	I0717 21:53:59.195788   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined IP address 192.168.39.29 and MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:53:59.195915   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHPort
	I0717 21:53:59.196106   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHKeyPath
	I0717 21:53:59.196285   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHKeyPath
	I0717 21:53:59.196394   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHUsername
	I0717 21:53:59.196571   30805 main.go:141] libmachine: Using SSH client type: native
	I0717 21:53:59.197090   30805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0717 21:53:59.197107   30805 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 21:53:59.322391   30805 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gf5d52c7-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0717 21:53:59.322465   30805 main.go:141] libmachine: found compatible host: buildroot
	I0717 21:53:59.322479   30805 main.go:141] libmachine: Provisioning with buildroot...
	I0717 21:53:59.322493   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetMachineName
	I0717 21:53:59.322754   30805 buildroot.go:166] provisioning hostname "ingress-addon-legacy-480151"
	I0717 21:53:59.322775   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetMachineName
	I0717 21:53:59.322952   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHHostname
	I0717 21:53:59.325312   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:53:59.325702   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:f2:d9", ip: ""} in network mk-ingress-addon-legacy-480151: {Iface:virbr1 ExpiryTime:2023-07-17 22:53:48 +0000 UTC Type:0 Mac:52:54:00:89:f2:d9 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ingress-addon-legacy-480151 Clientid:01:52:54:00:89:f2:d9}
	I0717 21:53:59.325739   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined IP address 192.168.39.29 and MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:53:59.325905   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHPort
	I0717 21:53:59.326084   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHKeyPath
	I0717 21:53:59.326215   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHKeyPath
	I0717 21:53:59.326310   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHUsername
	I0717 21:53:59.326441   30805 main.go:141] libmachine: Using SSH client type: native
	I0717 21:53:59.326989   30805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0717 21:53:59.327006   30805 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-480151 && echo "ingress-addon-legacy-480151" | sudo tee /etc/hostname
	I0717 21:53:59.466398   30805 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-480151
	
	I0717 21:53:59.466426   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHHostname
	I0717 21:53:59.469233   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:53:59.469603   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:f2:d9", ip: ""} in network mk-ingress-addon-legacy-480151: {Iface:virbr1 ExpiryTime:2023-07-17 22:53:48 +0000 UTC Type:0 Mac:52:54:00:89:f2:d9 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ingress-addon-legacy-480151 Clientid:01:52:54:00:89:f2:d9}
	I0717 21:53:59.469639   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined IP address 192.168.39.29 and MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:53:59.469778   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHPort
	I0717 21:53:59.469956   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHKeyPath
	I0717 21:53:59.470113   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHKeyPath
	I0717 21:53:59.470227   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHUsername
	I0717 21:53:59.470383   30805 main.go:141] libmachine: Using SSH client type: native
	I0717 21:53:59.470856   30805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0717 21:53:59.470878   30805 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-480151' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-480151/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-480151' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 21:53:59.605404   30805 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 21:53:59.605432   30805 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16899-15759/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-15759/.minikube}
	I0717 21:53:59.605464   30805 buildroot.go:174] setting up certificates
	I0717 21:53:59.605477   30805 provision.go:83] configureAuth start
	I0717 21:53:59.605487   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetMachineName
	I0717 21:53:59.605821   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetIP
	I0717 21:53:59.608576   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:53:59.608938   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:f2:d9", ip: ""} in network mk-ingress-addon-legacy-480151: {Iface:virbr1 ExpiryTime:2023-07-17 22:53:48 +0000 UTC Type:0 Mac:52:54:00:89:f2:d9 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ingress-addon-legacy-480151 Clientid:01:52:54:00:89:f2:d9}
	I0717 21:53:59.608970   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined IP address 192.168.39.29 and MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:53:59.609188   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHHostname
	I0717 21:53:59.611402   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:53:59.611723   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:f2:d9", ip: ""} in network mk-ingress-addon-legacy-480151: {Iface:virbr1 ExpiryTime:2023-07-17 22:53:48 +0000 UTC Type:0 Mac:52:54:00:89:f2:d9 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ingress-addon-legacy-480151 Clientid:01:52:54:00:89:f2:d9}
	I0717 21:53:59.611755   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined IP address 192.168.39.29 and MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:53:59.611829   30805 provision.go:138] copyHostCerts
	I0717 21:53:59.611868   30805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem
	I0717 21:53:59.611908   30805 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem, removing ...
	I0717 21:53:59.611924   30805 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem
	I0717 21:53:59.612002   30805 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem (1078 bytes)
	I0717 21:53:59.612110   30805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem
	I0717 21:53:59.612133   30805 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem, removing ...
	I0717 21:53:59.612141   30805 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem
	I0717 21:53:59.612179   30805 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem (1123 bytes)
	I0717 21:53:59.612253   30805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem
	I0717 21:53:59.612284   30805 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem, removing ...
	I0717 21:53:59.612293   30805 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem
	I0717 21:53:59.612330   30805 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem (1675 bytes)
	I0717 21:53:59.612432   30805 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-480151 san=[192.168.39.29 192.168.39.29 localhost 127.0.0.1 minikube ingress-addon-legacy-480151]
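The server cert generated above carries the SANs listed in that log line (the VM IP, localhost, "minikube" and the machine name), signed by the shared minikube CA. A minimal Go sketch of that kind of CA-signed server certificate with crypto/x509 follows; the throwaway CA, key sizes and validity periods are illustrative assumptions, not minikube's actual code paths.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; in the real flow the CA comes from .minikube/certs/ca.pem and ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ingress-addon-legacy-480151"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "ingress-addon-legacy-480151"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.39.29"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}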
	I0717 21:53:59.823741   30805 provision.go:172] copyRemoteCerts
	I0717 21:53:59.823799   30805 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 21:53:59.823822   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHHostname
	I0717 21:53:59.826884   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:53:59.827237   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:f2:d9", ip: ""} in network mk-ingress-addon-legacy-480151: {Iface:virbr1 ExpiryTime:2023-07-17 22:53:48 +0000 UTC Type:0 Mac:52:54:00:89:f2:d9 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ingress-addon-legacy-480151 Clientid:01:52:54:00:89:f2:d9}
	I0717 21:53:59.827266   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined IP address 192.168.39.29 and MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:53:59.827392   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHPort
	I0717 21:53:59.827616   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHKeyPath
	I0717 21:53:59.827775   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHUsername
	I0717 21:53:59.827913   30805 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/ingress-addon-legacy-480151/id_rsa Username:docker}
	I0717 21:53:59.918996   30805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 21:53:59.919062   30805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 21:53:59.941371   30805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 21:53:59.941432   30805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0717 21:53:59.963258   30805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 21:53:59.963342   30805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 21:53:59.985648   30805 provision.go:86] duration metric: configureAuth took 380.15864ms
	I0717 21:53:59.985696   30805 buildroot.go:189] setting minikube options for container-runtime
	I0717 21:53:59.985866   30805 config.go:182] Loaded profile config "ingress-addon-legacy-480151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0717 21:53:59.985935   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHHostname
	I0717 21:53:59.988769   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:53:59.989167   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:f2:d9", ip: ""} in network mk-ingress-addon-legacy-480151: {Iface:virbr1 ExpiryTime:2023-07-17 22:53:48 +0000 UTC Type:0 Mac:52:54:00:89:f2:d9 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ingress-addon-legacy-480151 Clientid:01:52:54:00:89:f2:d9}
	I0717 21:53:59.989198   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined IP address 192.168.39.29 and MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:53:59.989350   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHPort
	I0717 21:53:59.989553   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHKeyPath
	I0717 21:53:59.989730   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHKeyPath
	I0717 21:53:59.989892   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHUsername
	I0717 21:53:59.990054   30805 main.go:141] libmachine: Using SSH client type: native
	I0717 21:53:59.990484   30805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0717 21:53:59.990501   30805 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 21:54:00.313735   30805 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 21:54:00.313759   30805 main.go:141] libmachine: Checking connection to Docker...
	I0717 21:54:00.313777   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetURL
	I0717 21:54:00.314970   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | Using libvirt version 6000000
	I0717 21:54:00.316855   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:54:00.317181   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:f2:d9", ip: ""} in network mk-ingress-addon-legacy-480151: {Iface:virbr1 ExpiryTime:2023-07-17 22:53:48 +0000 UTC Type:0 Mac:52:54:00:89:f2:d9 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ingress-addon-legacy-480151 Clientid:01:52:54:00:89:f2:d9}
	I0717 21:54:00.317224   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined IP address 192.168.39.29 and MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:54:00.317446   30805 main.go:141] libmachine: Docker is up and running!
	I0717 21:54:00.317462   30805 main.go:141] libmachine: Reticulating splines...
	I0717 21:54:00.317467   30805 client.go:171] LocalClient.Create took 26.636796247s
	I0717 21:54:00.317491   30805 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-480151" took 26.636882477s
	I0717 21:54:00.317504   30805 start.go:300] post-start starting for "ingress-addon-legacy-480151" (driver="kvm2")
	I0717 21:54:00.317533   30805 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 21:54:00.317555   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .DriverName
	I0717 21:54:00.317782   30805 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 21:54:00.317804   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHHostname
	I0717 21:54:00.319860   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:54:00.320160   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:f2:d9", ip: ""} in network mk-ingress-addon-legacy-480151: {Iface:virbr1 ExpiryTime:2023-07-17 22:53:48 +0000 UTC Type:0 Mac:52:54:00:89:f2:d9 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ingress-addon-legacy-480151 Clientid:01:52:54:00:89:f2:d9}
	I0717 21:54:00.320190   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined IP address 192.168.39.29 and MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:54:00.320313   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHPort
	I0717 21:54:00.320501   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHKeyPath
	I0717 21:54:00.320642   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHUsername
	I0717 21:54:00.320793   30805 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/ingress-addon-legacy-480151/id_rsa Username:docker}
	I0717 21:54:00.411364   30805 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 21:54:00.415705   30805 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 21:54:00.415742   30805 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/addons for local assets ...
	I0717 21:54:00.415817   30805 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/files for local assets ...
	I0717 21:54:00.415898   30805 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> 229902.pem in /etc/ssl/certs
	I0717 21:54:00.415909   30805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> /etc/ssl/certs/229902.pem
	I0717 21:54:00.416003   30805 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 21:54:00.424781   30805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /etc/ssl/certs/229902.pem (1708 bytes)
	I0717 21:54:00.447253   30805 start.go:303] post-start completed in 129.732819ms
	I0717 21:54:00.447297   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetConfigRaw
	I0717 21:54:00.447862   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetIP
	I0717 21:54:00.450178   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:54:00.450506   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:f2:d9", ip: ""} in network mk-ingress-addon-legacy-480151: {Iface:virbr1 ExpiryTime:2023-07-17 22:53:48 +0000 UTC Type:0 Mac:52:54:00:89:f2:d9 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ingress-addon-legacy-480151 Clientid:01:52:54:00:89:f2:d9}
	I0717 21:54:00.450542   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined IP address 192.168.39.29 and MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:54:00.450726   30805 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/config.json ...
	I0717 21:54:00.450914   30805 start.go:128] duration metric: createHost completed in 26.788822287s
	I0717 21:54:00.450939   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHHostname
	I0717 21:54:00.453117   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:54:00.453472   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:f2:d9", ip: ""} in network mk-ingress-addon-legacy-480151: {Iface:virbr1 ExpiryTime:2023-07-17 22:53:48 +0000 UTC Type:0 Mac:52:54:00:89:f2:d9 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ingress-addon-legacy-480151 Clientid:01:52:54:00:89:f2:d9}
	I0717 21:54:00.453494   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined IP address 192.168.39.29 and MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:54:00.453631   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHPort
	I0717 21:54:00.453805   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHKeyPath
	I0717 21:54:00.453928   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHKeyPath
	I0717 21:54:00.454054   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHUsername
	I0717 21:54:00.454204   30805 main.go:141] libmachine: Using SSH client type: native
	I0717 21:54:00.454580   30805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0717 21:54:00.454591   30805 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 21:54:00.582235   30805 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689630840.556868844
	
	I0717 21:54:00.582270   30805 fix.go:206] guest clock: 1689630840.556868844
	I0717 21:54:00.582280   30805 fix.go:219] Guest: 2023-07-17 21:54:00.556868844 +0000 UTC Remote: 2023-07-17 21:54:00.450926414 +0000 UTC m=+31.074099028 (delta=105.94243ms)
	I0717 21:54:00.582304   30805 fix.go:190] guest clock delta is within tolerance: 105.94243ms
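The clock check above reads the guest time with `date +%s.%N` (1689630840.556868844) and compares it with the host-side timestamp (21:54:00.450926414 UTC), giving the logged delta of about 105.94 ms. A tiny Go sketch of that comparison, with the two values copied from this log; the 2-second tolerance here is an assumed placeholder, not minikube's actual threshold:

package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(1689630840, 556868844)                      // guest clock via `date +%s.%N`
	host := time.Date(2023, 7, 17, 21, 54, 0, 450926414, time.UTC) // host-side reference time
	delta := guest.Sub(host)
	const tolerance = 2 * time.Second // illustrative; the real limit lives in minikube's fix logic
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta > -tolerance && delta < tolerance)
}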
	I0717 21:54:00.582311   30805 start.go:83] releasing machines lock for "ingress-addon-legacy-480151", held for 26.920318373s
	I0717 21:54:00.582337   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .DriverName
	I0717 21:54:00.582602   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetIP
	I0717 21:54:00.585146   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:54:00.585425   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:f2:d9", ip: ""} in network mk-ingress-addon-legacy-480151: {Iface:virbr1 ExpiryTime:2023-07-17 22:53:48 +0000 UTC Type:0 Mac:52:54:00:89:f2:d9 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ingress-addon-legacy-480151 Clientid:01:52:54:00:89:f2:d9}
	I0717 21:54:00.585452   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined IP address 192.168.39.29 and MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:54:00.585573   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .DriverName
	I0717 21:54:00.586028   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .DriverName
	I0717 21:54:00.586211   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .DriverName
	I0717 21:54:00.586296   30805 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 21:54:00.586345   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHHostname
	I0717 21:54:00.586424   30805 ssh_runner.go:195] Run: cat /version.json
	I0717 21:54:00.586443   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHHostname
	I0717 21:54:00.588829   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:54:00.588860   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:54:00.589143   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:f2:d9", ip: ""} in network mk-ingress-addon-legacy-480151: {Iface:virbr1 ExpiryTime:2023-07-17 22:53:48 +0000 UTC Type:0 Mac:52:54:00:89:f2:d9 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ingress-addon-legacy-480151 Clientid:01:52:54:00:89:f2:d9}
	I0717 21:54:00.589177   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined IP address 192.168.39.29 and MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:54:00.589276   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHPort
	I0717 21:54:00.589281   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:f2:d9", ip: ""} in network mk-ingress-addon-legacy-480151: {Iface:virbr1 ExpiryTime:2023-07-17 22:53:48 +0000 UTC Type:0 Mac:52:54:00:89:f2:d9 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ingress-addon-legacy-480151 Clientid:01:52:54:00:89:f2:d9}
	I0717 21:54:00.589317   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined IP address 192.168.39.29 and MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:54:00.589435   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHPort
	I0717 21:54:00.589498   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHKeyPath
	I0717 21:54:00.589593   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHKeyPath
	I0717 21:54:00.589667   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHUsername
	I0717 21:54:00.589742   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHUsername
	I0717 21:54:00.589812   30805 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/ingress-addon-legacy-480151/id_rsa Username:docker}
	I0717 21:54:00.589844   30805 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/ingress-addon-legacy-480151/id_rsa Username:docker}
	I0717 21:54:00.709335   30805 ssh_runner.go:195] Run: systemctl --version
	I0717 21:54:00.715065   30805 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 21:54:00.875717   30805 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 21:54:00.881641   30805 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 21:54:00.881715   30805 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 21:54:00.897639   30805 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 21:54:00.897661   30805 start.go:466] detecting cgroup driver to use...
	I0717 21:54:00.897724   30805 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 21:54:00.915307   30805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 21:54:00.928490   30805 docker.go:196] disabling cri-docker service (if available) ...
	I0717 21:54:00.928541   30805 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 21:54:00.942126   30805 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 21:54:00.958308   30805 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 21:54:01.068203   30805 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 21:54:01.185634   30805 docker.go:212] disabling docker service ...
	I0717 21:54:01.185721   30805 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 21:54:01.198140   30805 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 21:54:01.209768   30805 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 21:54:01.313817   30805 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 21:54:01.426160   30805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 21:54:01.439027   30805 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 21:54:01.456262   30805 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 21:54:01.456320   30805 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 21:54:01.465209   30805 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 21:54:01.465292   30805 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 21:54:01.474225   30805 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 21:54:01.483167   30805 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 21:54:01.492084   30805 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
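The three sed one-liners above rewrite the `pause_image`, `cgroup_manager` and `conmon_cgroup` keys inside /etc/crio/crio.conf.d/02-crio.conf. A minimal Go equivalent of that "replace the whole key = ... line" approach, using only the standard library; the helper name and file handling are illustrative, not minikube's implementation:

package main

import (
	"os"
	"regexp"
)

// setKey replaces any existing `key = ...` line with `key = "value"`, mirroring the
// `sudo sed -i 's|^.*key = .*$|...|'` commands in the log above.
func setKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	_ = setKey(conf, "pause_image", "registry.k8s.io/pause:3.2")
	_ = setKey(conf, "cgroup_manager", "cgroupfs")
}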
	I0717 21:54:01.501746   30805 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 21:54:01.510176   30805 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 21:54:01.510248   30805 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 21:54:01.523083   30805 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
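The netfilter check above is a probe-then-fallback: the sysctl key only exists once br_netfilter is loaded, so the failed probe is expected and is followed by a modprobe and the ip_forward toggle. A small illustrative Go sketch of that pattern with os/exec (not minikube's code):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Probe: the key only appears after br_netfilter is loaded.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		log.Printf("probe failed (%v); loading br_netfilter", err)
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v", err)
		}
	}
	// Enable IPv4 forwarding either way, as the provisioner does.
	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
		log.Fatal(err)
	}
}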
	I0717 21:54:01.532122   30805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 21:54:01.637418   30805 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 21:54:01.802747   30805 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 21:54:01.802814   30805 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 21:54:01.807788   30805 start.go:534] Will wait 60s for crictl version
	I0717 21:54:01.807845   30805 ssh_runner.go:195] Run: which crictl
	I0717 21:54:01.811406   30805 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 21:54:01.843356   30805 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 21:54:01.843434   30805 ssh_runner.go:195] Run: crio --version
	I0717 21:54:01.892926   30805 ssh_runner.go:195] Run: crio --version
	I0717 21:54:01.944639   30805 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I0717 21:54:01.946044   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetIP
	I0717 21:54:01.948879   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:54:01.949148   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:f2:d9", ip: ""} in network mk-ingress-addon-legacy-480151: {Iface:virbr1 ExpiryTime:2023-07-17 22:53:48 +0000 UTC Type:0 Mac:52:54:00:89:f2:d9 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ingress-addon-legacy-480151 Clientid:01:52:54:00:89:f2:d9}
	I0717 21:54:01.949180   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined IP address 192.168.39.29 and MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:54:01.949415   30805 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 21:54:01.953749   30805 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 21:54:01.968023   30805 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0717 21:54:01.968077   30805 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 21:54:02.001394   30805 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0717 21:54:02.001466   30805 ssh_runner.go:195] Run: which lz4
	I0717 21:54:02.005421   30805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0717 21:54:02.005546   30805 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 21:54:02.009838   30805 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 21:54:02.009867   30805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0717 21:54:03.921050   30805 crio.go:444] Took 1.915557 seconds to copy over tarball
	I0717 21:54:03.921127   30805 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 21:54:07.026742   30805 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.10558754s)
	I0717 21:54:07.026764   30805 crio.go:451] Took 3.105691 seconds to extract the tarball
	I0717 21:54:07.026772   30805 ssh_runner.go:146] rm: /preloaded.tar.lz4
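For scale: the preload copy above moves 495,439,307 bytes (~472 MiB) in about 1.92 s, i.e. roughly 495439307 / 1.915557 ≈ 259 MB/s across the host-to-VM link, and the `tar -I lz4` extraction adds another ~3.11 s before the tarball is removed.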
	I0717 21:54:07.070931   30805 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 21:54:07.126273   30805 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0717 21:54:07.126298   30805 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 21:54:07.126363   30805 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0717 21:54:07.126392   30805 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0717 21:54:07.126392   30805 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0717 21:54:07.126406   30805 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0717 21:54:07.126372   30805 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 21:54:07.126555   30805 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 21:54:07.126590   30805 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 21:54:07.126557   30805 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0717 21:54:07.127655   30805 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 21:54:07.127657   30805 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 21:54:07.127660   30805 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0717 21:54:07.127687   30805 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0717 21:54:07.127654   30805 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0717 21:54:07.127706   30805 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 21:54:07.127777   30805 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0717 21:54:07.127844   30805 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0717 21:54:07.322316   30805 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0717 21:54:07.329369   30805 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0717 21:54:07.332813   30805 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0717 21:54:07.333231   30805 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 21:54:07.334616   30805 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0717 21:54:07.336342   30805 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 21:54:07.341472   30805 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0717 21:54:07.412524   30805 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 21:54:07.432705   30805 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0717 21:54:07.432754   30805 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0717 21:54:07.432803   30805 ssh_runner.go:195] Run: which crictl
	I0717 21:54:07.501312   30805 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0717 21:54:07.501344   30805 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0717 21:54:07.501378   30805 ssh_runner.go:195] Run: which crictl
	I0717 21:54:07.526744   30805 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0717 21:54:07.526780   30805 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0717 21:54:07.526821   30805 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 21:54:07.526865   30805 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 21:54:07.526876   30805 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0717 21:54:07.526896   30805 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0717 21:54:07.526916   30805 ssh_runner.go:195] Run: which crictl
	I0717 21:54:07.526923   30805 ssh_runner.go:195] Run: which crictl
	I0717 21:54:07.526825   30805 ssh_runner.go:195] Run: which crictl
	I0717 21:54:07.530296   30805 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0717 21:54:07.530345   30805 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 21:54:07.530381   30805 ssh_runner.go:195] Run: which crictl
	I0717 21:54:07.532867   30805 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0717 21:54:07.532898   30805 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0717 21:54:07.532947   30805 ssh_runner.go:195] Run: which crictl
	I0717 21:54:07.665422   30805 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0717 21:54:07.665503   30805 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0717 21:54:07.665546   30805 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0717 21:54:07.665620   30805 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 21:54:07.665677   30805 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0717 21:54:07.665720   30805 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 21:54:07.665753   30805 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0717 21:54:07.772411   30805 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 21:54:07.772467   30805 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0717 21:54:07.772467   30805 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0717 21:54:07.772526   30805 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0717 21:54:07.777597   30805 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0717 21:54:07.777658   30805 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0717 21:54:07.777690   30805 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0717 21:54:07.777742   30805 cache_images.go:92] LoadImages completed in 651.424027ms
	W0717 21:54:07.777821   30805 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0717 21:54:07.777888   30805 ssh_runner.go:195] Run: crio config
	I0717 21:54:07.834285   30805 cni.go:84] Creating CNI manager for ""
	I0717 21:54:07.834318   30805 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 21:54:07.834332   30805 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 21:54:07.834352   30805 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.29 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-480151 NodeName:ingress-addon-legacy-480151 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 21:54:07.834526   30805 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-480151"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
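The KubeletConfiguration document in the config above is what turns off disk-based eviction (the "0%" thresholds) and swap checking for the test VM. A minimal, illustrative sketch of reading just those fields with gopkg.in/yaml.v3; the struct below is a hypothetical subset for demonstration, not the full upstream KubeletConfiguration type:

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// Only the handful of fields discussed above; the real KubeletConfiguration has many more.
type kubeletConfig struct {
	CgroupDriver string            `yaml:"cgroupDriver"`
	FailSwapOn   bool              `yaml:"failSwapOn"`
	EvictionHard map[string]string `yaml:"evictionHard"`
}

func main() {
	doc := []byte(`
cgroupDriver: cgroupfs
failSwapOn: false
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
`)
	var kc kubeletConfig
	if err := yaml.Unmarshal(doc, &kc); err != nil {
		panic(err)
	}
	fmt.Printf("driver=%s failSwapOn=%v evictionHard=%v\n", kc.CgroupDriver, kc.FailSwapOn, kc.EvictionHard)
}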
	
	I0717 21:54:07.834624   30805 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-480151 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-480151 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 21:54:07.834692   30805 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0717 21:54:07.845243   30805 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 21:54:07.845314   30805 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 21:54:07.854339   30805 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (435 bytes)
	I0717 21:54:07.870405   30805 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0717 21:54:07.886262   30805 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0717 21:54:07.901960   30805 ssh_runner.go:195] Run: grep 192.168.39.29	control-plane.minikube.internal$ /etc/hosts
	I0717 21:54:07.905600   30805 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.29	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 21:54:07.917984   30805 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151 for IP: 192.168.39.29
	I0717 21:54:07.918021   30805 certs.go:190] acquiring lock for shared ca certs: {Name:mk358cdd8ffcf2f8ada4337c2bb687d932ef5afe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:54:07.918182   30805 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key
	I0717 21:54:07.918244   30805 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key
	I0717 21:54:07.918305   30805 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.key
	I0717 21:54:07.918330   30805 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt with IP's: []
	I0717 21:54:08.151425   30805 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt ...
	I0717 21:54:08.151453   30805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: {Name:mk2e85b2cc077d2711999e329636b16d6324ba08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:54:08.151632   30805 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.key ...
	I0717 21:54:08.151645   30805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.key: {Name:mk16bfaec12774b180b8a39b833b2edd063bf141 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:54:08.151716   30805 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/apiserver.key.8e23a1a4
	I0717 21:54:08.151730   30805 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/apiserver.crt.8e23a1a4 with IP's: [192.168.39.29 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 21:54:08.306411   30805 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/apiserver.crt.8e23a1a4 ...
	I0717 21:54:08.306438   30805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/apiserver.crt.8e23a1a4: {Name:mk8eb3e36e1fee3ed41a41fefc7538cb9fd9ec43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:54:08.306577   30805 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/apiserver.key.8e23a1a4 ...
	I0717 21:54:08.306587   30805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/apiserver.key.8e23a1a4: {Name:mk23da21834abffd169032b93fdaeab0c768f2d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:54:08.306651   30805 certs.go:337] copying /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/apiserver.crt.8e23a1a4 -> /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/apiserver.crt
	I0717 21:54:08.306723   30805 certs.go:341] copying /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/apiserver.key.8e23a1a4 -> /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/apiserver.key
	I0717 21:54:08.306812   30805 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/proxy-client.key
	I0717 21:54:08.306826   30805 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/proxy-client.crt with IP's: []
	I0717 21:54:08.540895   30805 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/proxy-client.crt ...
	I0717 21:54:08.540922   30805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/proxy-client.crt: {Name:mkac17fc5af380cee2e1ca2e60578955073c73da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:54:08.541093   30805 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/proxy-client.key ...
	I0717 21:54:08.541109   30805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/proxy-client.key: {Name:mkffe4b4d74e67c295fd346dc4d23c08c0e4544e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:54:08.541207   30805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 21:54:08.541225   30805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 21:54:08.541243   30805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 21:54:08.541261   30805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 21:54:08.541278   30805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 21:54:08.541292   30805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 21:54:08.541304   30805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 21:54:08.541317   30805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 21:54:08.541380   30805 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem (1338 bytes)
	W0717 21:54:08.541425   30805 certs.go:433] ignoring /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990_empty.pem, impossibly tiny 0 bytes
	I0717 21:54:08.541440   30805 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 21:54:08.541473   30805 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem (1078 bytes)
	I0717 21:54:08.541505   30805 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem (1123 bytes)
	I0717 21:54:08.541562   30805 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem (1675 bytes)
	I0717 21:54:08.541621   30805 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem (1708 bytes)
	I0717 21:54:08.541670   30805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> /usr/share/ca-certificates/229902.pem
	I0717 21:54:08.541689   30805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 21:54:08.541705   30805 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem -> /usr/share/ca-certificates/22990.pem
	I0717 21:54:08.542261   30805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 21:54:08.566549   30805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 21:54:08.589078   30805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 21:54:08.611436   30805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 21:54:08.634078   30805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 21:54:08.657238   30805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 21:54:08.679964   30805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 21:54:08.702574   30805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 21:54:08.725645   30805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /usr/share/ca-certificates/229902.pem (1708 bytes)
	I0717 21:54:08.748798   30805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 21:54:08.770839   30805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem --> /usr/share/ca-certificates/22990.pem (1338 bytes)
	I0717 21:54:08.793642   30805 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 21:54:08.809986   30805 ssh_runner.go:195] Run: openssl version
	I0717 21:54:08.815540   30805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/229902.pem && ln -fs /usr/share/ca-certificates/229902.pem /etc/ssl/certs/229902.pem"
	I0717 21:54:08.825566   30805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/229902.pem
	I0717 21:54:08.829993   30805 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 21:49 /usr/share/ca-certificates/229902.pem
	I0717 21:54:08.830060   30805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/229902.pem
	I0717 21:54:08.835503   30805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/229902.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 21:54:08.845447   30805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 21:54:08.855369   30805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 21:54:08.859727   30805 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0717 21:54:08.859774   30805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 21:54:08.865171   30805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 21:54:08.874923   30805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22990.pem && ln -fs /usr/share/ca-certificates/22990.pem /etc/ssl/certs/22990.pem"
	I0717 21:54:08.884219   30805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22990.pem
	I0717 21:54:08.888748   30805 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 21:49 /usr/share/ca-certificates/22990.pem
	I0717 21:54:08.888801   30805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22990.pem
	I0717 21:54:08.894802   30805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22990.pem /etc/ssl/certs/51391683.0"
	I0717 21:54:08.904837   30805 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 21:54:08.909602   30805 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 21:54:08.909657   30805 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-480151 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-480151 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:54:08.909829   30805 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 21:54:08.909924   30805 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 21:54:08.948575   30805 cri.go:89] found id: ""
	I0717 21:54:08.948640   30805 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 21:54:08.957476   30805 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 21:54:08.965654   30805 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 21:54:08.974183   30805 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 21:54:08.974243   30805 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0717 21:54:09.027046   30805 kubeadm.go:322] W0717 21:54:09.010333     965 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0717 21:54:09.142409   30805 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 21:54:11.982854   30805 kubeadm.go:322] W0717 21:54:11.968993     965 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0717 21:54:11.984302   30805 kubeadm.go:322] W0717 21:54:11.970396     965 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0717 21:54:22.528982   30805 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0717 21:54:22.529054   30805 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 21:54:22.529150   30805 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 21:54:22.529267   30805 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 21:54:22.529373   30805 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 21:54:22.529494   30805 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 21:54:22.529632   30805 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 21:54:22.529689   30805 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 21:54:22.529784   30805 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 21:54:22.531477   30805 out.go:204]   - Generating certificates and keys ...
	I0717 21:54:22.531555   30805 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 21:54:22.531654   30805 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 21:54:22.531720   30805 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 21:54:22.531784   30805 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0717 21:54:22.531924   30805 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0717 21:54:22.531996   30805 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0717 21:54:22.532067   30805 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0717 21:54:22.532230   30805 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-480151 localhost] and IPs [192.168.39.29 127.0.0.1 ::1]
	I0717 21:54:22.532277   30805 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0717 21:54:22.532433   30805 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-480151 localhost] and IPs [192.168.39.29 127.0.0.1 ::1]
	I0717 21:54:22.532489   30805 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 21:54:22.532575   30805 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 21:54:22.532661   30805 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0717 21:54:22.532712   30805 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 21:54:22.532762   30805 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 21:54:22.532818   30805 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 21:54:22.532899   30805 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 21:54:22.532947   30805 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 21:54:22.533022   30805 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 21:54:22.534935   30805 out.go:204]   - Booting up control plane ...
	I0717 21:54:22.535025   30805 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 21:54:22.535106   30805 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 21:54:22.535177   30805 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 21:54:22.535250   30805 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 21:54:22.535394   30805 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 21:54:22.535502   30805 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.004495 seconds
	I0717 21:54:22.535638   30805 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 21:54:22.535843   30805 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 21:54:22.535925   30805 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 21:54:22.536086   30805 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-480151 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0717 21:54:22.536160   30805 kubeadm.go:322] [bootstrap-token] Using token: jjiljc.hq12w1n640rv0i3e
	I0717 21:54:22.538778   30805 out.go:204]   - Configuring RBAC rules ...
	I0717 21:54:22.538905   30805 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 21:54:22.539022   30805 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 21:54:22.539228   30805 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 21:54:22.539367   30805 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 21:54:22.539528   30805 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 21:54:22.539617   30805 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 21:54:22.539715   30805 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 21:54:22.539753   30805 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 21:54:22.539829   30805 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 21:54:22.539844   30805 kubeadm.go:322] 
	I0717 21:54:22.539930   30805 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 21:54:22.539940   30805 kubeadm.go:322] 
	I0717 21:54:22.540032   30805 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 21:54:22.540040   30805 kubeadm.go:322] 
	I0717 21:54:22.540063   30805 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 21:54:22.540129   30805 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 21:54:22.540175   30805 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 21:54:22.540181   30805 kubeadm.go:322] 
	I0717 21:54:22.540222   30805 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 21:54:22.540366   30805 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 21:54:22.540466   30805 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 21:54:22.540475   30805 kubeadm.go:322] 
	I0717 21:54:22.540542   30805 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 21:54:22.540624   30805 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 21:54:22.540635   30805 kubeadm.go:322] 
	I0717 21:54:22.540755   30805 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token jjiljc.hq12w1n640rv0i3e \
	I0717 21:54:22.540908   30805 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb \
	I0717 21:54:22.540949   30805 kubeadm.go:322]     --control-plane 
	I0717 21:54:22.540959   30805 kubeadm.go:322] 
	I0717 21:54:22.541078   30805 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 21:54:22.541091   30805 kubeadm.go:322] 
	I0717 21:54:22.541205   30805 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token jjiljc.hq12w1n640rv0i3e \
	I0717 21:54:22.541328   30805 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb 
	I0717 21:54:22.541341   30805 cni.go:84] Creating CNI manager for ""
	I0717 21:54:22.541350   30805 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 21:54:22.543086   30805 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 21:54:22.544519   30805 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 21:54:22.570054   30805 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 21:54:22.599066   30805 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 21:54:22.599163   30805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:54:22.599171   30805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.0 minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8 minikube.k8s.io/name=ingress-addon-legacy-480151 minikube.k8s.io/updated_at=2023_07_17T21_54_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:54:22.646476   30805 ops.go:34] apiserver oom_adj: -16
	I0717 21:54:22.930924   30805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:54:23.645126   30805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:54:24.144869   30805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:54:24.645469   30805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:54:25.144547   30805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:54:25.644692   30805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:54:26.145094   30805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:54:26.645460   30805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:54:27.145038   30805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:54:27.644564   30805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:54:28.145400   30805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:54:28.645224   30805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:54:29.144730   30805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:54:29.645474   30805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:54:30.144921   30805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:54:30.644778   30805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:54:31.145511   30805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:54:31.645474   30805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:54:32.144825   30805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:54:32.645401   30805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:54:33.145220   30805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:54:33.644482   30805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:54:34.144845   30805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:54:34.645009   30805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:54:35.145104   30805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:54:35.644491   30805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:54:36.145473   30805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:54:36.644664   30805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:54:37.144488   30805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:54:37.714930   30805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:54:37.916426   30805 kubeadm.go:1081] duration metric: took 15.317333013s to wait for elevateKubeSystemPrivileges.
	I0717 21:54:37.916460   30805 kubeadm.go:406] StartCluster complete in 29.006805076s
	I0717 21:54:37.916498   30805 settings.go:142] acquiring lock: {Name:mk9be0d05c943f5010cb1ca9690e7c6a87f18950 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:54:37.916582   30805 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 21:54:37.917492   30805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/kubeconfig: {Name:mk1295c692902c2f497638c9fc1fd126fdb1a2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:54:37.917812   30805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 21:54:37.917902   30805 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 21:54:37.918006   30805 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-480151"
	I0717 21:54:37.918028   30805 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-480151"
	I0717 21:54:37.918035   30805 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-480151"
	I0717 21:54:37.918044   30805 config.go:182] Loaded profile config "ingress-addon-legacy-480151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0717 21:54:37.918058   30805 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-480151"
	I0717 21:54:37.918070   30805 host.go:66] Checking if "ingress-addon-legacy-480151" exists ...
	I0717 21:54:37.918467   30805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:54:37.918516   30805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:54:37.918514   30805 kapi.go:59] client config for ingress-addon-legacy-480151: &rest.Config{Host:"https://192.168.39.29:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt", KeyFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.key", CAFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 21:54:37.918641   30805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:54:37.918680   30805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:54:37.919482   30805 cert_rotation.go:137] Starting client certificate rotation controller
	I0717 21:54:37.936816   30805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40027
	I0717 21:54:37.936815   30805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34267
	I0717 21:54:37.937275   30805 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:54:37.937370   30805 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:54:37.937838   30805 main.go:141] libmachine: Using API Version  1
	I0717 21:54:37.937864   30805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:54:37.937981   30805 main.go:141] libmachine: Using API Version  1
	I0717 21:54:37.937997   30805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:54:37.938286   30805 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:54:37.938334   30805 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:54:37.938486   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetState
	I0717 21:54:37.938938   30805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:54:37.938984   30805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:54:37.941318   30805 kapi.go:59] client config for ingress-addon-legacy-480151: &rest.Config{Host:"https://192.168.39.29:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt", KeyFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.key", CAFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 21:54:37.955830   30805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42721
	I0717 21:54:37.956270   30805 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:54:37.956813   30805 main.go:141] libmachine: Using API Version  1
	I0717 21:54:37.956836   30805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:54:37.957160   30805 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:54:37.957381   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetState
	I0717 21:54:37.958957   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .DriverName
	I0717 21:54:37.961216   30805 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 21:54:37.960781   30805 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-480151"
	I0717 21:54:37.961268   30805 host.go:66] Checking if "ingress-addon-legacy-480151" exists ...
	I0717 21:54:37.962961   30805 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 21:54:37.962974   30805 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 21:54:37.961677   30805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:54:37.962990   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHHostname
	I0717 21:54:37.963019   30805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:54:37.966463   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:54:37.966971   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:f2:d9", ip: ""} in network mk-ingress-addon-legacy-480151: {Iface:virbr1 ExpiryTime:2023-07-17 22:53:48 +0000 UTC Type:0 Mac:52:54:00:89:f2:d9 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ingress-addon-legacy-480151 Clientid:01:52:54:00:89:f2:d9}
	I0717 21:54:37.967007   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined IP address 192.168.39.29 and MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:54:37.967318   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHPort
	I0717 21:54:37.967523   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHKeyPath
	I0717 21:54:37.967723   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHUsername
	I0717 21:54:37.967915   30805 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/ingress-addon-legacy-480151/id_rsa Username:docker}
	I0717 21:54:37.979001   30805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42687
	I0717 21:54:37.979399   30805 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:54:37.979942   30805 main.go:141] libmachine: Using API Version  1
	I0717 21:54:37.979966   30805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:54:37.980283   30805 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:54:37.980769   30805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:54:37.980803   30805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:54:37.996072   30805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39249
	I0717 21:54:37.996538   30805 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:54:37.997071   30805 main.go:141] libmachine: Using API Version  1
	I0717 21:54:37.997092   30805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:54:37.997452   30805 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:54:37.997672   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetState
	I0717 21:54:37.999266   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .DriverName
	I0717 21:54:37.999521   30805 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 21:54:37.999538   30805 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 21:54:37.999554   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHHostname
	I0717 21:54:38.002540   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:54:38.003073   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:f2:d9", ip: ""} in network mk-ingress-addon-legacy-480151: {Iface:virbr1 ExpiryTime:2023-07-17 22:53:48 +0000 UTC Type:0 Mac:52:54:00:89:f2:d9 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:ingress-addon-legacy-480151 Clientid:01:52:54:00:89:f2:d9}
	I0717 21:54:38.003107   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | domain ingress-addon-legacy-480151 has defined IP address 192.168.39.29 and MAC address 52:54:00:89:f2:d9 in network mk-ingress-addon-legacy-480151
	I0717 21:54:38.003253   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHPort
	I0717 21:54:38.003429   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHKeyPath
	I0717 21:54:38.003612   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .GetSSHUsername
	I0717 21:54:38.003748   30805 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/ingress-addon-legacy-480151/id_rsa Username:docker}
	I0717 21:54:38.317715   30805 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 21:54:38.326366   30805 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 21:54:38.369232   30805 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 21:54:38.533397   30805 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-480151" context rescaled to 1 replicas
	I0717 21:54:38.533434   30805 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 21:54:38.536292   30805 out.go:177] * Verifying Kubernetes components...
	I0717 21:54:38.537777   30805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 21:54:39.319251   30805 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.001501158s)
	I0717 21:54:39.319302   30805 main.go:141] libmachine: Making call to close driver server
	I0717 21:54:39.319311   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .Close
	I0717 21:54:39.319329   30805 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0717 21:54:39.319389   30805 main.go:141] libmachine: Making call to close driver server
	I0717 21:54:39.319407   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .Close
	I0717 21:54:39.319604   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | Closing plugin on server side
	I0717 21:54:39.319646   30805 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:54:39.319656   30805 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:54:39.319692   30805 main.go:141] libmachine: Making call to close driver server
	I0717 21:54:39.319708   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .Close
	I0717 21:54:39.319734   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | Closing plugin on server side
	I0717 21:54:39.319757   30805 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:54:39.319775   30805 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:54:39.319788   30805 main.go:141] libmachine: Making call to close driver server
	I0717 21:54:39.319800   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .Close
	I0717 21:54:39.320155   30805 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:54:39.320161   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | Closing plugin on server side
	I0717 21:54:39.320168   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) DBG | Closing plugin on server side
	I0717 21:54:39.320171   30805 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:54:39.320193   30805 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:54:39.320203   30805 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:54:39.320207   30805 main.go:141] libmachine: Making call to close driver server
	I0717 21:54:39.320217   30805 main.go:141] libmachine: (ingress-addon-legacy-480151) Calling .Close
	I0717 21:54:39.320151   30805 kapi.go:59] client config for ingress-addon-legacy-480151: &rest.Config{Host:"https://192.168.39.29:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt", KeyFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.key", CAFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 21:54:39.320394   30805 main.go:141] libmachine: Successfully made call to close driver server
	I0717 21:54:39.320485   30805 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 21:54:39.320492   30805 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-480151" to be "Ready" ...
	I0717 21:54:39.322586   30805 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0717 21:54:39.324542   30805 addons.go:502] enable addons completed in 1.406647512s: enabled=[storage-provisioner default-storageclass]
	I0717 21:54:39.330695   30805 node_ready.go:49] node "ingress-addon-legacy-480151" has status "Ready":"True"
	I0717 21:54:39.330713   30805 node_ready.go:38] duration metric: took 10.208017ms waiting for node "ingress-addon-legacy-480151" to be "Ready" ...
	I0717 21:54:39.330721   30805 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 21:54:39.353846   30805 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-q6th7" in "kube-system" namespace to be "Ready" ...
	I0717 21:54:41.375461   30805 pod_ready.go:102] pod "coredns-66bff467f8-q6th7" in "kube-system" namespace has status "Ready":"False"
	I0717 21:54:43.874650   30805 pod_ready.go:102] pod "coredns-66bff467f8-q6th7" in "kube-system" namespace has status "Ready":"False"
	I0717 21:54:45.874890   30805 pod_ready.go:102] pod "coredns-66bff467f8-q6th7" in "kube-system" namespace has status "Ready":"False"
	I0717 21:54:48.375471   30805 pod_ready.go:102] pod "coredns-66bff467f8-q6th7" in "kube-system" namespace has status "Ready":"False"
	I0717 21:54:50.376417   30805 pod_ready.go:102] pod "coredns-66bff467f8-q6th7" in "kube-system" namespace has status "Ready":"False"
	I0717 21:54:52.876027   30805 pod_ready.go:102] pod "coredns-66bff467f8-q6th7" in "kube-system" namespace has status "Ready":"False"
	I0717 21:54:55.374578   30805 pod_ready.go:102] pod "coredns-66bff467f8-q6th7" in "kube-system" namespace has status "Ready":"False"
	I0717 21:54:57.374625   30805 pod_ready.go:102] pod "coredns-66bff467f8-q6th7" in "kube-system" namespace has status "Ready":"False"
	I0717 21:54:59.375911   30805 pod_ready.go:102] pod "coredns-66bff467f8-q6th7" in "kube-system" namespace has status "Ready":"False"
	I0717 21:55:01.875142   30805 pod_ready.go:102] pod "coredns-66bff467f8-q6th7" in "kube-system" namespace has status "Ready":"False"
	I0717 21:55:03.875947   30805 pod_ready.go:102] pod "coredns-66bff467f8-q6th7" in "kube-system" namespace has status "Ready":"False"
	I0717 21:55:05.877219   30805 pod_ready.go:102] pod "coredns-66bff467f8-q6th7" in "kube-system" namespace has status "Ready":"False"
	I0717 21:55:08.374639   30805 pod_ready.go:102] pod "coredns-66bff467f8-q6th7" in "kube-system" namespace has status "Ready":"False"
	I0717 21:55:10.375478   30805 pod_ready.go:102] pod "coredns-66bff467f8-q6th7" in "kube-system" namespace has status "Ready":"False"
	I0717 21:55:12.874530   30805 pod_ready.go:102] pod "coredns-66bff467f8-q6th7" in "kube-system" namespace has status "Ready":"False"
	I0717 21:55:14.875108   30805 pod_ready.go:102] pod "coredns-66bff467f8-q6th7" in "kube-system" namespace has status "Ready":"False"
	I0717 21:55:16.378299   30805 pod_ready.go:92] pod "coredns-66bff467f8-q6th7" in "kube-system" namespace has status "Ready":"True"
	I0717 21:55:16.378324   30805 pod_ready.go:81] duration metric: took 37.024449662s waiting for pod "coredns-66bff467f8-q6th7" in "kube-system" namespace to be "Ready" ...
	I0717 21:55:16.378333   30805 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-480151" in "kube-system" namespace to be "Ready" ...
	I0717 21:55:16.387067   30805 pod_ready.go:92] pod "etcd-ingress-addon-legacy-480151" in "kube-system" namespace has status "Ready":"True"
	I0717 21:55:16.387090   30805 pod_ready.go:81] duration metric: took 8.75043ms waiting for pod "etcd-ingress-addon-legacy-480151" in "kube-system" namespace to be "Ready" ...
	I0717 21:55:16.387099   30805 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-480151" in "kube-system" namespace to be "Ready" ...
	I0717 21:55:16.396669   30805 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-480151" in "kube-system" namespace has status "Ready":"True"
	I0717 21:55:16.396696   30805 pod_ready.go:81] duration metric: took 9.590027ms waiting for pod "kube-apiserver-ingress-addon-legacy-480151" in "kube-system" namespace to be "Ready" ...
	I0717 21:55:16.396708   30805 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-480151" in "kube-system" namespace to be "Ready" ...
	I0717 21:55:16.405208   30805 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-480151" in "kube-system" namespace has status "Ready":"True"
	I0717 21:55:16.405230   30805 pod_ready.go:81] duration metric: took 8.514063ms waiting for pod "kube-controller-manager-ingress-addon-legacy-480151" in "kube-system" namespace to be "Ready" ...
	I0717 21:55:16.405238   30805 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s269q" in "kube-system" namespace to be "Ready" ...
	I0717 21:55:16.412009   30805 pod_ready.go:92] pod "kube-proxy-s269q" in "kube-system" namespace has status "Ready":"True"
	I0717 21:55:16.412030   30805 pod_ready.go:81] duration metric: took 6.784717ms waiting for pod "kube-proxy-s269q" in "kube-system" namespace to be "Ready" ...
	I0717 21:55:16.412041   30805 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-480151" in "kube-system" namespace to be "Ready" ...
	I0717 21:55:16.568355   30805 request.go:628] Waited for 156.240712ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.29:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-480151
	I0717 21:55:16.768229   30805 request.go:628] Waited for 196.317213ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.29:8443/api/v1/nodes/ingress-addon-legacy-480151
	I0717 21:55:16.771789   30805 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-480151" in "kube-system" namespace has status "Ready":"True"
	I0717 21:55:16.771811   30805 pod_ready.go:81] duration metric: took 359.762185ms waiting for pod "kube-scheduler-ingress-addon-legacy-480151" in "kube-system" namespace to be "Ready" ...
	I0717 21:55:16.771825   30805 pod_ready.go:38] duration metric: took 37.441095831s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 21:55:16.771843   30805 api_server.go:52] waiting for apiserver process to appear ...
	I0717 21:55:16.771894   30805 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 21:55:16.788035   30805 api_server.go:72] duration metric: took 38.25457746s to wait for apiserver process to appear ...
	I0717 21:55:16.788058   30805 api_server.go:88] waiting for apiserver healthz status ...
	I0717 21:55:16.788079   30805 api_server.go:253] Checking apiserver healthz at https://192.168.39.29:8443/healthz ...
	I0717 21:55:16.794002   30805 api_server.go:279] https://192.168.39.29:8443/healthz returned 200:
	ok
	I0717 21:55:16.795105   30805 api_server.go:141] control plane version: v1.18.20
	I0717 21:55:16.795126   30805 api_server.go:131] duration metric: took 7.063092ms to wait for apiserver health ...
	I0717 21:55:16.795133   30805 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 21:55:16.968537   30805 request.go:628] Waited for 173.34366ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.29:8443/api/v1/namespaces/kube-system/pods
	I0717 21:55:16.990309   30805 system_pods.go:59] 7 kube-system pods found
	I0717 21:55:16.990339   30805 system_pods.go:61] "coredns-66bff467f8-q6th7" [c97a38d0-ee22-4e40-ae86-0d3e4c577f08] Running
	I0717 21:55:16.990344   30805 system_pods.go:61] "etcd-ingress-addon-legacy-480151" [d178b90b-9976-4ac4-aa3c-fa6f5146f854] Running
	I0717 21:55:16.990349   30805 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-480151" [da58de4d-423b-48da-a365-e5dd024321bd] Running
	I0717 21:55:16.990353   30805 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-480151" [bb351c20-9f65-44a9-943b-0501ef403000] Running
	I0717 21:55:16.990357   30805 system_pods.go:61] "kube-proxy-s269q" [ae6a31cf-4872-4d95-9655-8211e20b96ab] Running
	I0717 21:55:16.990361   30805 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-480151" [7a484e7c-bdff-4e18-b941-84ce2682aeba] Running
	I0717 21:55:16.990367   30805 system_pods.go:61] "storage-provisioner" [112e8637-b15d-420f-8887-85df1e33883e] Running
	I0717 21:55:16.990372   30805 system_pods.go:74] duration metric: took 195.234426ms to wait for pod list to return data ...
	I0717 21:55:16.990378   30805 default_sa.go:34] waiting for default service account to be created ...
	I0717 21:55:17.168827   30805 request.go:628] Waited for 178.379847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.29:8443/api/v1/namespaces/default/serviceaccounts
	I0717 21:55:17.173408   30805 default_sa.go:45] found service account: "default"
	I0717 21:55:17.173429   30805 default_sa.go:55] duration metric: took 183.046471ms for default service account to be created ...
	I0717 21:55:17.173437   30805 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 21:55:17.368868   30805 request.go:628] Waited for 195.379433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.29:8443/api/v1/namespaces/kube-system/pods
	I0717 21:55:17.376840   30805 system_pods.go:86] 7 kube-system pods found
	I0717 21:55:17.376865   30805 system_pods.go:89] "coredns-66bff467f8-q6th7" [c97a38d0-ee22-4e40-ae86-0d3e4c577f08] Running
	I0717 21:55:17.376870   30805 system_pods.go:89] "etcd-ingress-addon-legacy-480151" [d178b90b-9976-4ac4-aa3c-fa6f5146f854] Running
	I0717 21:55:17.376874   30805 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-480151" [da58de4d-423b-48da-a365-e5dd024321bd] Running
	I0717 21:55:17.376878   30805 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-480151" [bb351c20-9f65-44a9-943b-0501ef403000] Running
	I0717 21:55:17.376883   30805 system_pods.go:89] "kube-proxy-s269q" [ae6a31cf-4872-4d95-9655-8211e20b96ab] Running
	I0717 21:55:17.376887   30805 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-480151" [7a484e7c-bdff-4e18-b941-84ce2682aeba] Running
	I0717 21:55:17.376892   30805 system_pods.go:89] "storage-provisioner" [112e8637-b15d-420f-8887-85df1e33883e] Running
	I0717 21:55:17.376897   30805 system_pods.go:126] duration metric: took 203.456295ms to wait for k8s-apps to be running ...
	I0717 21:55:17.376903   30805 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 21:55:17.376939   30805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 21:55:17.392304   30805 system_svc.go:56] duration metric: took 15.392746ms WaitForService to wait for kubelet.
	I0717 21:55:17.392336   30805 kubeadm.go:581] duration metric: took 38.85887004s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 21:55:17.392373   30805 node_conditions.go:102] verifying NodePressure condition ...
	I0717 21:55:17.568822   30805 request.go:628] Waited for 176.381156ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.29:8443/api/v1/nodes
	I0717 21:55:17.572572   30805 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 21:55:17.572603   30805 node_conditions.go:123] node cpu capacity is 2
	I0717 21:55:17.572613   30805 node_conditions.go:105] duration metric: took 180.234248ms to run NodePressure ...
	I0717 21:55:17.572622   30805 start.go:228] waiting for startup goroutines ...
	I0717 21:55:17.572631   30805 start.go:233] waiting for cluster config update ...
	I0717 21:55:17.572640   30805 start.go:242] writing updated cluster config ...
	I0717 21:55:17.572884   30805 ssh_runner.go:195] Run: rm -f paused
	I0717 21:55:17.617780   30805 start.go:578] kubectl: 1.27.3, cluster: 1.18.20 (minor skew: 9)
	I0717 21:55:17.619916   30805 out.go:177] 
	W0717 21:55:17.621398   30805 out.go:239] ! /usr/local/bin/kubectl is version 1.27.3, which may have incompatibilities with Kubernetes 1.18.20.
	I0717 21:55:17.622947   30805 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0717 21:55:17.624475   30805 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-480151" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-07-17 21:53:45 UTC, ends at Mon 2023-07-17 21:58:21 UTC. --
	Jul 17 21:58:20 ingress-addon-legacy-480151 crio[722]: time="2023-07-17 21:58:20.952817286Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:478fc06166ca9d5a0fa897447b1c3cbf34a89b888468cbbff8a560e92188739e,PodSandboxId:0d43161410b1b321ef6f46825adb659467dd2f97ba6bfd576488c86511ed4c4e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1689631082216356682,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-6gdxs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e18817a-c990-4093-bf22-29cc0b3ae94c,},Annotations:map[string]string{io.kubernetes.container.hash: c2e40a2c,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919acf2780b5d657768cc4589922b0a1d548f646ea662efd44161e6f0745e793,PodSandboxId:40f54d305a919a783370458a0f9af8b28f03b8a6998c8c8826c5a8c1a853551f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1689630943924788276,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1573b30b-a311-4799-8a91-b4d776ee3681,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 68307758,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eb2c081912e9d5416cdef068ec2e91479ddb7dba2e279c6e5ccd982096af18c,PodSandboxId:9564a65054016d7c4fccb53ca698747747fea92d8afe2bc13223a54e35f68441,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689630881489945505,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 112e8637-b15d-420f-8887-85df1e33883e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a653dbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f150b170e27355813eced426bfbaaa9c33c6c671f014e911ad35bf864df25074,PodSandboxId:ee9280c9ac433a43a1c2b7c8a97ea0f29fbf2f530dd3271950f46f79f8ef2f4c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1689630879517933643,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s269q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae6a31cf-4872-4d95-9655-8211e2
0b96ab,},Annotations:map[string]string{io.kubernetes.container.hash: 4da55662,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:638bfa80f2c2676a83eb5d32c1e4481a66f127da758c51c35f98f1dabf17e4d7,PodSandboxId:9e525832bc36e67db213405f29ee44abb93680f6c4a8f0057f0d07ede9d35beb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1689630878897130502,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-q6th7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c97a38d0-ee22-4e40-ae86-0d3e4c577f08,},Annotations:map[string]string{io
.kubernetes.container.hash: 9a76aff9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:941a8310f69d27b61e8c309cc05eec5cee840115a09e6ddef478af54cfbb690e,PodSandboxId:a0b0e50da156210cc5c971d818beeda2b7c1738b071eedb5eee93a315a35bbd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1689630855170997423,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ingress-addon-legacy-480151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0adcee1a489d0b6560d986b235aec76,},Annotations:map[string]string{io.kubernetes.container.hash: a3cab507,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e1d4de0c5aa501f4f2722f3133b6d9a92bcaab7126759b1bf118b29e33d00cb,PodSandboxId:73af9a80a97bf005b752671977dd47d375f8de4e9c55c880949a89ea70acddb2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1689630854104042803,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name:
kube-scheduler-ingress-addon-legacy-480151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3a172b218a66d24ad799f620fef475aa1922dc36121035bab4a6fffa52f141b,PodSandboxId:aec7786dd4cf36c5270ccb4c2c407206468613d915d7661ed813db276d9773c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1689630853765986825,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.
kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-480151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94f42b6c6d1ddc2bd0830750725598fbd9e7111a8e988e49d48d8f2d0e5fa28b,PodSandboxId:4c70f0345e1f1ee09ac8d04bd46dff875e4ea958f7a7e72451e60fb58d04c989,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1689630853658220211,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,
io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-480151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5c2020d8598b1921e5361eeb5b9b77b,},Annotations:map[string]string{io.kubernetes.container.hash: 831e695b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f483c7d1-2318-415c-9077-b77bd7d9e4da name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 21:58:21 ingress-addon-legacy-480151 crio[722]: time="2023-07-17 21:58:21.048327245Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=79f9b23f-bfa8-448b-98ec-f0d91a70df01 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 21:58:21 ingress-addon-legacy-480151 crio[722]: time="2023-07-17 21:58:21.048394424Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=79f9b23f-bfa8-448b-98ec-f0d91a70df01 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 21:58:21 ingress-addon-legacy-480151 crio[722]: time="2023-07-17 21:58:21.048677056Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:478fc06166ca9d5a0fa897447b1c3cbf34a89b888468cbbff8a560e92188739e,PodSandboxId:0d43161410b1b321ef6f46825adb659467dd2f97ba6bfd576488c86511ed4c4e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1689631082216356682,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-6gdxs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e18817a-c990-4093-bf22-29cc0b3ae94c,},Annotations:map[string]string{io.kubernetes.container.hash: c2e40a2c,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919acf2780b5d657768cc4589922b0a1d548f646ea662efd44161e6f0745e793,PodSandboxId:40f54d305a919a783370458a0f9af8b28f03b8a6998c8c8826c5a8c1a853551f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1689630943924788276,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1573b30b-a311-4799-8a91-b4d776ee3681,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 68307758,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:070b18c34481705b74aa0810461a94494473d2d55640c7f3845af7962a58e62c,PodSandboxId:86b2eda31ad7916f3635d3b26763fe250eccfac5155a247bbe83f08df6f4ac03,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1689630929870693590,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-6gxjc,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b2d5cb81-e581-4854-b8b9-15968ee13dd1,},Annotations:map[string]string{io.kubernetes.container.hash: 76a0d43d,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:3984a0d54eba9c094cc436c18e78e24e19c7f43490dacd51b0786ec5880b5611,PodSandboxId:02d6ccb8bb77f2e8bfc9287f535ca5b960152fc99f589c7304be5c6bb4d94f73,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1689630920817384282,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-6vzwg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 77cccfa9-d92c-4d67-9dc0-0ec74f7f643d,},Annotations:map[string]string{io.kubernetes.container.hash: cfa7504f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e84412beed17fd4ac212fd13eda68518d3bebb979a4836d3ab0e7cee140a3cc,PodSandboxId:98b79ea5b36ebda3d21b04a7173ebab0cfa29808800f4400448a5c5af82aae8f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1689630920652387188,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-k7xzx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 45435268-043a-48ed-bb6f-f9665ae3a030,},Annotations:map[string]string{io.kubernetes.container.hash: 913ff72f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eb2c081912e9d5416cdef068ec2e91479ddb7dba2e279c6e5ccd982096af18c,PodSandboxId:9564a65054016d7c4fccb53ca698747747fea92d8afe2bc13223a54e35f68441,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689630881489945505,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 112e8637-b15d-420f-8887-85df1e33883e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a653dbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f150b170e27355813eced426bfbaaa9c33c6c671f014e911ad35bf864df25074,PodSandboxId:ee9280c9ac433a43a1c2b7c8a97ea0f29fbf2f530dd3271950f46f79f8ef2f4c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1689630879517933643,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s269q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae6a31cf-4872-4d95-9655-8211e20b96ab,},Annotations:map[string]string{io.kubernetes.container.hash: 4da55662,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:638bfa80f2c2676a83eb5d32c1e4481a66f127da758c51c35f98f1dabf17e4d7,PodSandboxId:9e525832bc36e67db213405f29ee44abb93680f6c4a8f0057f0d07ede9d35beb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1689630878897130502,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-q6th7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c97a38d0-ee22-4e40-ae86-0d3e4c577f08,},Annotations:map[string]string{io.kubernetes.container.hash: 9a76aff9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:941a8310f69d27b61e8c309cc05eec5cee840115a09e6ddef478af54cfbb690e,Pod
SandboxId:a0b0e50da156210cc5c971d818beeda2b7c1738b071eedb5eee93a315a35bbd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1689630855170997423,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-480151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0adcee1a489d0b6560d986b235aec76,},Annotations:map[string]string{io.kubernetes.container.hash: a3cab507,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e1d4de0c5aa501f4f2722f3133b6d9a92bcaab7126759b1bf118b29e33d00cb,PodSandboxId:73af9a80a97bf005b752671977dd47d375f8
de4e9c55c880949a89ea70acddb2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1689630854104042803,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-480151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3a172b218a66d24ad799f620fef475aa1922dc36121035bab4a6fffa52f141b,PodSandboxId:aec7786dd4cf36c5270ccb4c2c407206468613d915
d7661ed813db276d9773c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1689630853765986825,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-480151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94f42b6c6d1ddc2bd0830750725598fbd9e7111a8e988e49d48d8f2d0e5fa28b,PodSandboxId:4c70f0345e1f
1ee09ac8d04bd46dff875e4ea958f7a7e72451e60fb58d04c989,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1689630853658220211,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-480151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5c2020d8598b1921e5361eeb5b9b77b,},Annotations:map[string]string{io.kubernetes.container.hash: 831e695b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=79f9b23f-bfa8-448b-98ec-f0d91a70df01 name=/runtime.v1alpha2.Runt
imeService/ListContainers
	Jul 17 21:58:21 ingress-addon-legacy-480151 crio[722]: time="2023-07-17 21:58:21.081218570Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=37f42ca8-3173-4de2-a075-4d45eeeb4e44 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 21:58:21 ingress-addon-legacy-480151 crio[722]: time="2023-07-17 21:58:21.081308583Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=37f42ca8-3173-4de2-a075-4d45eeeb4e44 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 21:58:21 ingress-addon-legacy-480151 crio[722]: time="2023-07-17 21:58:21.081606257Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:478fc06166ca9d5a0fa897447b1c3cbf34a89b888468cbbff8a560e92188739e,PodSandboxId:0d43161410b1b321ef6f46825adb659467dd2f97ba6bfd576488c86511ed4c4e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1689631082216356682,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-6gdxs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e18817a-c990-4093-bf22-29cc0b3ae94c,},Annotations:map[string]string{io.kubernetes.container.hash: c2e40a2c,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919acf2780b5d657768cc4589922b0a1d548f646ea662efd44161e6f0745e793,PodSandboxId:40f54d305a919a783370458a0f9af8b28f03b8a6998c8c8826c5a8c1a853551f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1689630943924788276,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1573b30b-a311-4799-8a91-b4d776ee3681,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 68307758,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:070b18c34481705b74aa0810461a94494473d2d55640c7f3845af7962a58e62c,PodSandboxId:86b2eda31ad7916f3635d3b26763fe250eccfac5155a247bbe83f08df6f4ac03,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1689630929870693590,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-6gxjc,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b2d5cb81-e581-4854-b8b9-15968ee13dd1,},Annotations:map[string]string{io.kubernetes.container.hash: 76a0d43d,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:3984a0d54eba9c094cc436c18e78e24e19c7f43490dacd51b0786ec5880b5611,PodSandboxId:02d6ccb8bb77f2e8bfc9287f535ca5b960152fc99f589c7304be5c6bb4d94f73,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1689630920817384282,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-6vzwg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 77cccfa9-d92c-4d67-9dc0-0ec74f7f643d,},Annotations:map[string]string{io.kubernetes.container.hash: cfa7504f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e84412beed17fd4ac212fd13eda68518d3bebb979a4836d3ab0e7cee140a3cc,PodSandboxId:98b79ea5b36ebda3d21b04a7173ebab0cfa29808800f4400448a5c5af82aae8f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1689630920652387188,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-k7xzx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 45435268-043a-48ed-bb6f-f9665ae3a030,},Annotations:map[string]string{io.kubernetes.container.hash: 913ff72f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eb2c081912e9d5416cdef068ec2e91479ddb7dba2e279c6e5ccd982096af18c,PodSandboxId:9564a65054016d7c4fccb53ca698747747fea92d8afe2bc13223a54e35f68441,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689630881489945505,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 112e8637-b15d-420f-8887-85df1e33883e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a653dbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f150b170e27355813eced426bfbaaa9c33c6c671f014e911ad35bf864df25074,PodSandboxId:ee9280c9ac433a43a1c2b7c8a97ea0f29fbf2f530dd3271950f46f79f8ef2f4c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1689630879517933643,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s269q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae6a31cf-4872-4d95-9655-8211e20b96ab,},Annotations:map[string]string{io.kubernetes.container.hash: 4da55662,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:638bfa80f2c2676a83eb5d32c1e4481a66f127da758c51c35f98f1dabf17e4d7,PodSandboxId:9e525832bc36e67db213405f29ee44abb93680f6c4a8f0057f0d07ede9d35beb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1689630878897130502,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-q6th7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c97a38d0-ee22-4e40-ae86-0d3e4c577f08,},Annotations:map[string]string{io.kubernetes.container.hash: 9a76aff9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:941a8310f69d27b61e8c309cc05eec5cee840115a09e6ddef478af54cfbb690e,Pod
SandboxId:a0b0e50da156210cc5c971d818beeda2b7c1738b071eedb5eee93a315a35bbd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1689630855170997423,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-480151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0adcee1a489d0b6560d986b235aec76,},Annotations:map[string]string{io.kubernetes.container.hash: a3cab507,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e1d4de0c5aa501f4f2722f3133b6d9a92bcaab7126759b1bf118b29e33d00cb,PodSandboxId:73af9a80a97bf005b752671977dd47d375f8
de4e9c55c880949a89ea70acddb2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1689630854104042803,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-480151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3a172b218a66d24ad799f620fef475aa1922dc36121035bab4a6fffa52f141b,PodSandboxId:aec7786dd4cf36c5270ccb4c2c407206468613d915
d7661ed813db276d9773c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1689630853765986825,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-480151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94f42b6c6d1ddc2bd0830750725598fbd9e7111a8e988e49d48d8f2d0e5fa28b,PodSandboxId:4c70f0345e1f
1ee09ac8d04bd46dff875e4ea958f7a7e72451e60fb58d04c989,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1689630853658220211,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-480151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5c2020d8598b1921e5361eeb5b9b77b,},Annotations:map[string]string{io.kubernetes.container.hash: 831e695b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=37f42ca8-3173-4de2-a075-4d45eeeb4e44 name=/runtime.v1alpha2.Runt
imeService/ListContainers
	Jul 17 21:58:21 ingress-addon-legacy-480151 crio[722]: time="2023-07-17 21:58:21.117203245Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cadef09c-8fa5-40f8-b90d-20c459027b44 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 21:58:21 ingress-addon-legacy-480151 crio[722]: time="2023-07-17 21:58:21.117298281Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cadef09c-8fa5-40f8-b90d-20c459027b44 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 21:58:21 ingress-addon-legacy-480151 crio[722]: time="2023-07-17 21:58:21.117539355Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:478fc06166ca9d5a0fa897447b1c3cbf34a89b888468cbbff8a560e92188739e,PodSandboxId:0d43161410b1b321ef6f46825adb659467dd2f97ba6bfd576488c86511ed4c4e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1689631082216356682,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-6gdxs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e18817a-c990-4093-bf22-29cc0b3ae94c,},Annotations:map[string]string{io.kubernetes.container.hash: c2e40a2c,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919acf2780b5d657768cc4589922b0a1d548f646ea662efd44161e6f0745e793,PodSandboxId:40f54d305a919a783370458a0f9af8b28f03b8a6998c8c8826c5a8c1a853551f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1689630943924788276,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1573b30b-a311-4799-8a91-b4d776ee3681,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 68307758,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:070b18c34481705b74aa0810461a94494473d2d55640c7f3845af7962a58e62c,PodSandboxId:86b2eda31ad7916f3635d3b26763fe250eccfac5155a247bbe83f08df6f4ac03,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1689630929870693590,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-6gxjc,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b2d5cb81-e581-4854-b8b9-15968ee13dd1,},Annotations:map[string]string{io.kubernetes.container.hash: 76a0d43d,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:3984a0d54eba9c094cc436c18e78e24e19c7f43490dacd51b0786ec5880b5611,PodSandboxId:02d6ccb8bb77f2e8bfc9287f535ca5b960152fc99f589c7304be5c6bb4d94f73,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1689630920817384282,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-6vzwg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 77cccfa9-d92c-4d67-9dc0-0ec74f7f643d,},Annotations:map[string]string{io.kubernetes.container.hash: cfa7504f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e84412beed17fd4ac212fd13eda68518d3bebb979a4836d3ab0e7cee140a3cc,PodSandboxId:98b79ea5b36ebda3d21b04a7173ebab0cfa29808800f4400448a5c5af82aae8f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1689630920652387188,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-k7xzx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 45435268-043a-48ed-bb6f-f9665ae3a030,},Annotations:map[string]string{io.kubernetes.container.hash: 913ff72f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eb2c081912e9d5416cdef068ec2e91479ddb7dba2e279c6e5ccd982096af18c,PodSandboxId:9564a65054016d7c4fccb53ca698747747fea92d8afe2bc13223a54e35f68441,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689630881489945505,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 112e8637-b15d-420f-8887-85df1e33883e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a653dbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f150b170e27355813eced426bfbaaa9c33c6c671f014e911ad35bf864df25074,PodSandboxId:ee9280c9ac433a43a1c2b7c8a97ea0f29fbf2f530dd3271950f46f79f8ef2f4c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1689630879517933643,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s269q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae6a31cf-4872-4d95-9655-8211e20b96ab,},Annotations:map[string]string{io.kubernetes.container.hash: 4da55662,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:638bfa80f2c2676a83eb5d32c1e4481a66f127da758c51c35f98f1dabf17e4d7,PodSandboxId:9e525832bc36e67db213405f29ee44abb93680f6c4a8f0057f0d07ede9d35beb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1689630878897130502,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-q6th7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c97a38d0-ee22-4e40-ae86-0d3e4c577f08,},Annotations:map[string]string{io.kubernetes.container.hash: 9a76aff9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:941a8310f69d27b61e8c309cc05eec5cee840115a09e6ddef478af54cfbb690e,Pod
SandboxId:a0b0e50da156210cc5c971d818beeda2b7c1738b071eedb5eee93a315a35bbd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1689630855170997423,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-480151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0adcee1a489d0b6560d986b235aec76,},Annotations:map[string]string{io.kubernetes.container.hash: a3cab507,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e1d4de0c5aa501f4f2722f3133b6d9a92bcaab7126759b1bf118b29e33d00cb,PodSandboxId:73af9a80a97bf005b752671977dd47d375f8
de4e9c55c880949a89ea70acddb2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1689630854104042803,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-480151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3a172b218a66d24ad799f620fef475aa1922dc36121035bab4a6fffa52f141b,PodSandboxId:aec7786dd4cf36c5270ccb4c2c407206468613d915
d7661ed813db276d9773c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1689630853765986825,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-480151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94f42b6c6d1ddc2bd0830750725598fbd9e7111a8e988e49d48d8f2d0e5fa28b,PodSandboxId:4c70f0345e1f
1ee09ac8d04bd46dff875e4ea958f7a7e72451e60fb58d04c989,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1689630853658220211,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-480151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5c2020d8598b1921e5361eeb5b9b77b,},Annotations:map[string]string{io.kubernetes.container.hash: 831e695b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cadef09c-8fa5-40f8-b90d-20c459027b44 name=/runtime.v1alpha2.Runt
imeService/ListContainers
	Jul 17 21:58:21 ingress-addon-legacy-480151 crio[722]: time="2023-07-17 21:58:21.151315224Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f8aa5c7f-57ac-41e8-af40-2d825a6adb83 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 21:58:21 ingress-addon-legacy-480151 crio[722]: time="2023-07-17 21:58:21.151410115Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f8aa5c7f-57ac-41e8-af40-2d825a6adb83 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 21:58:21 ingress-addon-legacy-480151 crio[722]: time="2023-07-17 21:58:21.151663568Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:478fc06166ca9d5a0fa897447b1c3cbf34a89b888468cbbff8a560e92188739e,PodSandboxId:0d43161410b1b321ef6f46825adb659467dd2f97ba6bfd576488c86511ed4c4e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1689631082216356682,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-6gdxs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e18817a-c990-4093-bf22-29cc0b3ae94c,},Annotations:map[string]string{io.kubernetes.container.hash: c2e40a2c,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919acf2780b5d657768cc4589922b0a1d548f646ea662efd44161e6f0745e793,PodSandboxId:40f54d305a919a783370458a0f9af8b28f03b8a6998c8c8826c5a8c1a853551f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1689630943924788276,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1573b30b-a311-4799-8a91-b4d776ee3681,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 68307758,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:070b18c34481705b74aa0810461a94494473d2d55640c7f3845af7962a58e62c,PodSandboxId:86b2eda31ad7916f3635d3b26763fe250eccfac5155a247bbe83f08df6f4ac03,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1689630929870693590,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-6gxjc,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b2d5cb81-e581-4854-b8b9-15968ee13dd1,},Annotations:map[string]string{io.kubernetes.container.hash: 76a0d43d,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:3984a0d54eba9c094cc436c18e78e24e19c7f43490dacd51b0786ec5880b5611,PodSandboxId:02d6ccb8bb77f2e8bfc9287f535ca5b960152fc99f589c7304be5c6bb4d94f73,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1689630920817384282,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-6vzwg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 77cccfa9-d92c-4d67-9dc0-0ec74f7f643d,},Annotations:map[string]string{io.kubernetes.container.hash: cfa7504f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e84412beed17fd4ac212fd13eda68518d3bebb979a4836d3ab0e7cee140a3cc,PodSandboxId:98b79ea5b36ebda3d21b04a7173ebab0cfa29808800f4400448a5c5af82aae8f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1689630920652387188,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-k7xzx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 45435268-043a-48ed-bb6f-f9665ae3a030,},Annotations:map[string]string{io.kubernetes.container.hash: 913ff72f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eb2c081912e9d5416cdef068ec2e91479ddb7dba2e279c6e5ccd982096af18c,PodSandboxId:9564a65054016d7c4fccb53ca698747747fea92d8afe2bc13223a54e35f68441,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689630881489945505,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 112e8637-b15d-420f-8887-85df1e33883e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a653dbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f150b170e27355813eced426bfbaaa9c33c6c671f014e911ad35bf864df25074,PodSandboxId:ee9280c9ac433a43a1c2b7c8a97ea0f29fbf2f530dd3271950f46f79f8ef2f4c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1689630879517933643,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s269q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae6a31cf-4872-4d95-9655-8211e20b96ab,},Annotations:map[string]string{io.kubernetes.container.hash: 4da55662,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:638bfa80f2c2676a83eb5d32c1e4481a66f127da758c51c35f98f1dabf17e4d7,PodSandboxId:9e525832bc36e67db213405f29ee44abb93680f6c4a8f0057f0d07ede9d35beb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1689630878897130502,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-q6th7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c97a38d0-ee22-4e40-ae86-0d3e4c577f08,},Annotations:map[string]string{io.kubernetes.container.hash: 9a76aff9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:941a8310f69d27b61e8c309cc05eec5cee840115a09e6ddef478af54cfbb690e,Pod
SandboxId:a0b0e50da156210cc5c971d818beeda2b7c1738b071eedb5eee93a315a35bbd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1689630855170997423,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-480151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0adcee1a489d0b6560d986b235aec76,},Annotations:map[string]string{io.kubernetes.container.hash: a3cab507,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e1d4de0c5aa501f4f2722f3133b6d9a92bcaab7126759b1bf118b29e33d00cb,PodSandboxId:73af9a80a97bf005b752671977dd47d375f8
de4e9c55c880949a89ea70acddb2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1689630854104042803,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-480151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3a172b218a66d24ad799f620fef475aa1922dc36121035bab4a6fffa52f141b,PodSandboxId:aec7786dd4cf36c5270ccb4c2c407206468613d915
d7661ed813db276d9773c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1689630853765986825,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-480151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94f42b6c6d1ddc2bd0830750725598fbd9e7111a8e988e49d48d8f2d0e5fa28b,PodSandboxId:4c70f0345e1f
1ee09ac8d04bd46dff875e4ea958f7a7e72451e60fb58d04c989,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1689630853658220211,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-480151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5c2020d8598b1921e5361eeb5b9b77b,},Annotations:map[string]string{io.kubernetes.container.hash: 831e695b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f8aa5c7f-57ac-41e8-af40-2d825a6adb83 name=/runtime.v1alpha2.Runt
imeService/ListContainers
	Jul 17 21:58:21 ingress-addon-legacy-480151 crio[722]: time="2023-07-17 21:58:21.184623318Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4f8a899c-d8c8-4408-adf2-a029ff8785f0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 21:58:21 ingress-addon-legacy-480151 crio[722]: time="2023-07-17 21:58:21.184713804Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4f8a899c-d8c8-4408-adf2-a029ff8785f0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 21:58:21 ingress-addon-legacy-480151 crio[722]: time="2023-07-17 21:58:21.184985321Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:478fc06166ca9d5a0fa897447b1c3cbf34a89b888468cbbff8a560e92188739e,PodSandboxId:0d43161410b1b321ef6f46825adb659467dd2f97ba6bfd576488c86511ed4c4e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea,State:CONTAINER_RUNNING,CreatedAt:1689631082216356682,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-6gdxs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e18817a-c990-4093-bf22-29cc0b3ae94c,},Annotations:map[string]string{io.kubernetes.container.hash: c2e40a2c,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919acf2780b5d657768cc4589922b0a1d548f646ea662efd44161e6f0745e793,PodSandboxId:40f54d305a919a783370458a0f9af8b28f03b8a6998c8c8826c5a8c1a853551f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6,State:CONTAINER_RUNNING,CreatedAt:1689630943924788276,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1573b30b-a311-4799-8a91-b4d776ee3681,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 68307758,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:070b18c34481705b74aa0810461a94494473d2d55640c7f3845af7962a58e62c,PodSandboxId:86b2eda31ad7916f3635d3b26763fe250eccfac5155a247bbe83f08df6f4ac03,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1689630929870693590,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-6gxjc,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b2d5cb81-e581-4854-b8b9-15968ee13dd1,},Annotations:map[string]string{io.kubernetes.container.hash: 76a0d43d,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:3984a0d54eba9c094cc436c18e78e24e19c7f43490dacd51b0786ec5880b5611,PodSandboxId:02d6ccb8bb77f2e8bfc9287f535ca5b960152fc99f589c7304be5c6bb4d94f73,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1689630920817384282,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-6vzwg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 77cccfa9-d92c-4d67-9dc0-0ec74f7f643d,},Annotations:map[string]string{io.kubernetes.container.hash: cfa7504f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e84412beed17fd4ac212fd13eda68518d3bebb979a4836d3ab0e7cee140a3cc,PodSandboxId:98b79ea5b36ebda3d21b04a7173ebab0cfa29808800f4400448a5c5af82aae8f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1689630920652387188,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-k7xzx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 45435268-043a-48ed-bb6f-f9665ae3a030,},Annotations:map[string]string{io.kubernetes.container.hash: 913ff72f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eb2c081912e9d5416cdef068ec2e91479ddb7dba2e279c6e5ccd982096af18c,PodSandboxId:9564a65054016d7c4fccb53ca698747747fea92d8afe2bc13223a54e35f68441,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689630881489945505,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 112e8637-b15d-420f-8887-85df1e33883e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a653dbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f150b170e27355813eced426bfbaaa9c33c6c671f014e911ad35bf864df25074,PodSandboxId:ee9280c9ac433a43a1c2b7c8a97ea0f29fbf2f530dd3271950f46f79f8ef2f4c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1689630879517933643,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s269q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae6a31cf-4872-4d95-9655-8211e20b96ab,},Annotations:map[string]string{io.kubernetes.container.hash: 4da55662,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:638bfa80f2c2676a83eb5d32c1e4481a66f127da758c51c35f98f1dabf17e4d7,PodSandboxId:9e525832bc36e67db213405f29ee44abb93680f6c4a8f0057f0d07ede9d35beb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1689630878897130502,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-q6th7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c97a38d0-ee22-4e40-ae86-0d3e4c577f08,},Annotations:map[string]string{io.kubernetes.container.hash: 9a76aff9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:941a8310f69d27b61e8c309cc05eec5cee840115a09e6ddef478af54cfbb690e,Pod
SandboxId:a0b0e50da156210cc5c971d818beeda2b7c1738b071eedb5eee93a315a35bbd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1689630855170997423,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-480151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0adcee1a489d0b6560d986b235aec76,},Annotations:map[string]string{io.kubernetes.container.hash: a3cab507,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e1d4de0c5aa501f4f2722f3133b6d9a92bcaab7126759b1bf118b29e33d00cb,PodSandboxId:73af9a80a97bf005b752671977dd47d375f8
de4e9c55c880949a89ea70acddb2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1689630854104042803,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-480151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3a172b218a66d24ad799f620fef475aa1922dc36121035bab4a6fffa52f141b,PodSandboxId:aec7786dd4cf36c5270ccb4c2c407206468613d915
d7661ed813db276d9773c9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1689630853765986825,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-480151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94f42b6c6d1ddc2bd0830750725598fbd9e7111a8e988e49d48d8f2d0e5fa28b,PodSandboxId:4c70f0345e1f
1ee09ac8d04bd46dff875e4ea958f7a7e72451e60fb58d04c989,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1689630853658220211,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-480151,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5c2020d8598b1921e5361eeb5b9b77b,},Annotations:map[string]string{io.kubernetes.container.hash: 831e695b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4f8a899c-d8c8-4408-adf2-a029ff8785f0 name=/runtime.v1alpha2.Runt
imeService/ListContainers
	[The same unfiltered ListContainers request/response cycle repeats at 21:58:21.220, 21:58:21.253, and 21:58:21.283 with an identical container list each time; the duplicate response dumps are omitted here.]
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	478fc06166ca9       gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea            19 seconds ago      Running             hello-world-app           0                   0d43161410b1b
	919acf2780b5d       docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6                    2 minutes ago       Running             nginx                     0                   40f54d305a919
	070b18c344817       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   86b2eda31ad79
	3984a0d54eba9       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   02d6ccb8bb77f
	2e84412beed17       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   98b79ea5b36eb
	6eb2c081912e9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   9564a65054016
	f150b170e2735       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   ee9280c9ac433
	638bfa80f2c26       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   9e525832bc36e
	941a8310f69d2       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   4 minutes ago       Running             etcd                      0                   a0b0e50da1562
	5e1d4de0c5aa5       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   4 minutes ago       Running             kube-scheduler            0                   73af9a80a97bf
	c3a172b218a66       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   4 minutes ago       Running             kube-controller-manager   0                   aec7786dd4cf3
	94f42b6c6d1dd       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   4 minutes ago       Running             kube-apiserver            0                   4c70f0345e1f1
	
	* 
	* ==> coredns [638bfa80f2c2676a83eb5d32c1e4481a66f127da758c51c35f98f1dabf17e4d7] <==
	* [INFO] 10.244.0.6:50397 - 21687 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000065644s
	[INFO] 10.244.0.6:50397 - 13425 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00015524s
	[INFO] 10.244.0.6:50397 - 45286 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000067996s
	[INFO] 10.244.0.6:37436 - 25959 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000109485s
	[INFO] 10.244.0.6:37436 - 2393 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000050063s
	[INFO] 10.244.0.6:50397 - 50064 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00010482s
	[INFO] 10.244.0.6:37436 - 43510 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00004598s
	[INFO] 10.244.0.6:37436 - 32723 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00009591s
	[INFO] 10.244.0.6:37436 - 16831 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000045537s
	[INFO] 10.244.0.6:37436 - 56822 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000068819s
	[INFO] 10.244.0.6:37436 - 32063 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000153207s
	[INFO] 10.244.0.6:34715 - 54535 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000093179s
	[INFO] 10.244.0.6:59881 - 65367 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00008566s
	[INFO] 10.244.0.6:59881 - 59993 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000030782s
	[INFO] 10.244.0.6:34715 - 45876 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000026943s
	[INFO] 10.244.0.6:59881 - 25230 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000022082s
	[INFO] 10.244.0.6:34715 - 43495 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000017573s
	[INFO] 10.244.0.6:59881 - 62848 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000021577s
	[INFO] 10.244.0.6:34715 - 4775 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000026731s
	[INFO] 10.244.0.6:59881 - 50617 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000021941s
	[INFO] 10.244.0.6:34715 - 59663 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00001729s
	[INFO] 10.244.0.6:59881 - 6699 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000027354s
	[INFO] 10.244.0.6:34715 - 36477 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000016486s
	[INFO] 10.244.0.6:59881 - 63612 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000139144s
	[INFO] 10.244.0.6:34715 - 63297 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000055719s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-480151
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-480151
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8
	                    minikube.k8s.io/name=ingress-addon-legacy-480151
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T21_54_22_0700
	                    minikube.k8s.io/version=v1.31.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 21:54:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-480151
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 21:58:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 21:55:53 +0000   Mon, 17 Jul 2023 21:54:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 21:55:53 +0000   Mon, 17 Jul 2023 21:54:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 21:55:53 +0000   Mon, 17 Jul 2023 21:54:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 21:55:53 +0000   Mon, 17 Jul 2023 21:54:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.29
	  Hostname:    ingress-addon-legacy-480151
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	System Info:
	  Machine ID:                 49928c60ec9948c0bafaed543c92622f
	  System UUID:                49928c60-ec99-48c0-bafa-ed543c92622f
	  Boot ID:                    2d5aec9d-70ad-465d-872d-2d62f791fd4f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-6gdxs                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 coredns-66bff467f8-q6th7                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m44s
	  kube-system                 etcd-ingress-addon-legacy-480151                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-apiserver-ingress-addon-legacy-480151             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-480151    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-proxy-s269q                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-scheduler-ingress-addon-legacy-480151             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 3m59s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m59s  kubelet     Node ingress-addon-legacy-480151 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m59s  kubelet     Node ingress-addon-legacy-480151 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m59s  kubelet     Node ingress-addon-legacy-480151 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m59s  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m48s  kubelet     Node ingress-addon-legacy-480151 status is now: NodeReady
	  Normal  Starting                 3m42s  kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Jul17 21:53] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.100751] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.377916] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.399904] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.136433] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.058439] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul17 21:54] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.107023] systemd-fstab-generator[658]: Ignoring "noauto" for root device
	[  +0.137235] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.108616] systemd-fstab-generator[682]: Ignoring "noauto" for root device
	[  +0.215225] systemd-fstab-generator[706]: Ignoring "noauto" for root device
	[  +7.775041] systemd-fstab-generator[1034]: Ignoring "noauto" for root device
	[  +2.978869] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +9.829235] systemd-fstab-generator[1422]: Ignoring "noauto" for root device
	[ +16.000635] kauditd_printk_skb: 6 callbacks suppressed
	[Jul17 21:55] kauditd_printk_skb: 20 callbacks suppressed
	[  +6.061362] kauditd_printk_skb: 14 callbacks suppressed
	[Jul17 21:57] kauditd_printk_skb: 5 callbacks suppressed
	[Jul17 21:58] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [941a8310f69d27b61e8c309cc05eec5cee840115a09e6ddef478af54cfbb690e] <==
	* 2023-07-17 21:54:15.313386 W | auth: simple token is not cryptographically signed
	2023-07-17 21:54:15.317812 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-07-17 21:54:15.320354 I | etcdserver: 97e52954629f162b as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-07-17 21:54:15.321805 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	raft2023/07/17 21:54:15 INFO: 97e52954629f162b switched to configuration voters=(10945199911802443307)
	2023-07-17 21:54:15.322559 I | etcdserver/membership: added member 97e52954629f162b [https://192.168.39.29:2380] to cluster f775b7b69fff5d11
	2023-07-17 21:54:15.322644 I | embed: listening for peers on 192.168.39.29:2380
	2023-07-17 21:54:15.323144 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/07/17 21:54:16 INFO: 97e52954629f162b is starting a new election at term 1
	raft2023/07/17 21:54:16 INFO: 97e52954629f162b became candidate at term 2
	raft2023/07/17 21:54:16 INFO: 97e52954629f162b received MsgVoteResp from 97e52954629f162b at term 2
	raft2023/07/17 21:54:16 INFO: 97e52954629f162b became leader at term 2
	raft2023/07/17 21:54:16 INFO: raft.node: 97e52954629f162b elected leader 97e52954629f162b at term 2
	2023-07-17 21:54:16.303783 I | etcdserver: published {Name:ingress-addon-legacy-480151 ClientURLs:[https://192.168.39.29:2379]} to cluster f775b7b69fff5d11
	2023-07-17 21:54:16.303823 I | embed: ready to serve client requests
	2023-07-17 21:54:16.304594 I | etcdserver: setting up the initial cluster version to 3.4
	2023-07-17 21:54:16.304899 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-07-17 21:54:16.304973 I | etcdserver/api: enabled capabilities for version 3.4
	2023-07-17 21:54:16.304999 I | embed: ready to serve client requests
	2023-07-17 21:54:16.305529 I | embed: serving client requests on 127.0.0.1:2379
	2023-07-17 21:54:16.306213 I | embed: serving client requests on 192.168.39.29:2379
	2023-07-17 21:54:37.698970 W | etcdserver: request "header:<ID:1597521563355220253 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/ingress-addon-legacy-480151.1772c654afd4c131\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/ingress-addon-legacy-480151.1772c654afd4c131\" value_size:668 lease:1597521563355220036 >> failure:<>>" with result "size:16" took too long (449.546504ms) to execute
	2023-07-17 21:54:37.703446 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (447.621778ms) to execute
	2023-07-17 21:55:34.505267 W | etcdserver: read-only range request "key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true " with result "range_response_count:0 size:5" took too long (373.324504ms) to execute
	2023-07-17 21:55:50.399298 W | etcdserver: read-only range request "key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true " with result "range_response_count:0 size:7" took too long (336.537743ms) to execute
	
	* 
	* ==> kernel <==
	*  21:58:21 up 4 min,  0 users,  load average: 0.43, 0.40, 0.19
	Linux ingress-addon-legacy-480151 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [94f42b6c6d1ddc2bd0830750725598fbd9e7111a8e988e49d48d8f2d0e5fa28b] <==
	* I0717 21:54:19.231037       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0717 21:54:19.244889       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.39.29, ResourceVersion: 0, AdditionalErrorMsg: 
	I0717 21:54:19.331755       1 cache.go:39] Caches are synced for autoregister controller
	I0717 21:54:19.339228       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0717 21:54:19.339501       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 21:54:19.340577       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 21:54:19.340646       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0717 21:54:20.230532       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0717 21:54:20.230587       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0717 21:54:20.244417       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0717 21:54:20.249853       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0717 21:54:20.249893       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0717 21:54:20.728209       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 21:54:20.793842       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0717 21:54:20.888846       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.39.29]
	I0717 21:54:20.889772       1 controller.go:609] quota admission added evaluator for: endpoints
	I0717 21:54:20.893757       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 21:54:21.575824       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0717 21:54:22.325970       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0717 21:54:22.462886       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0717 21:54:23.046813       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 21:54:37.303389       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0717 21:54:37.321200       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0717 21:55:18.422324       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0717 21:55:41.173412       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [c3a172b218a66d24ad799f620fef475aa1922dc36121035bab4a6fffa52f141b] <==
	* I0717 21:54:37.382770       1 shared_informer.go:230] Caches are synced for resource quota 
	I0717 21:54:37.422320       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I0717 21:54:37.444617       1 shared_informer.go:230] Caches are synced for attach detach 
	I0717 21:54:37.575548       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0717 21:54:37.575635       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0717 21:54:37.577442       1 shared_informer.go:230] Caches are synced for service account 
	I0717 21:54:37.617493       1 shared_informer.go:230] Caches are synced for namespace 
	I0717 21:54:37.624809       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0717 21:54:37.711179       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"dd764555-5a14-4280-adc6-53db68d713b9", APIVersion:"apps/v1", ResourceVersion:"200", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2
	I0717 21:54:37.734287       1 shared_informer.go:223] Waiting for caches to sync for resource quota
	I0717 21:54:37.734344       1 shared_informer.go:230] Caches are synced for resource quota 
	I0717 21:54:37.757754       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"d8bf45a3-6a93-48fa-be52-6c9a158efb77", APIVersion:"apps/v1", ResourceVersion:"209", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-s269q
	I0717 21:54:37.757955       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"39603350-a569-40d6-9998-75cbb0cb39de", APIVersion:"apps/v1", ResourceVersion:"310", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-vfmgs
	I0717 21:54:37.782702       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"39603350-a569-40d6-9998-75cbb0cb39de", APIVersion:"apps/v1", ResourceVersion:"310", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-q6th7
	I0717 21:54:37.993771       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"dd764555-5a14-4280-adc6-53db68d713b9", APIVersion:"apps/v1", ResourceVersion:"348", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0717 21:54:38.021250       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"39603350-a569-40d6-9998-75cbb0cb39de", APIVersion:"apps/v1", ResourceVersion:"349", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-vfmgs
	I0717 21:55:18.404924       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"08ff488e-4f26-4f6d-a15e-433d49e8e838", APIVersion:"apps/v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0717 21:55:18.448629       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"a4995bad-cf12-44c6-8e88-8409778b7363", APIVersion:"apps/v1", ResourceVersion:"462", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-6gxjc
	I0717 21:55:18.473690       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"85a0a082-dbfe-455a-bfbb-0e8644ad77f2", APIVersion:"batch/v1", ResourceVersion:"466", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-k7xzx
	I0717 21:55:18.550399       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"3c8d76ca-d719-475e-b369-a80ee1b2ff19", APIVersion:"batch/v1", ResourceVersion:"479", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-6vzwg
	I0717 21:55:21.184533       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"85a0a082-dbfe-455a-bfbb-0e8644ad77f2", APIVersion:"batch/v1", ResourceVersion:"477", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0717 21:55:21.207002       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"3c8d76ca-d719-475e-b369-a80ee1b2ff19", APIVersion:"batch/v1", ResourceVersion:"487", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0717 21:58:00.057820       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"9f44eefa-1c79-4d39-b091-babf75f37945", APIVersion:"apps/v1", ResourceVersion:"676", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0717 21:58:00.072628       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"02c9ab72-3357-4855-8be1-a8836d92d3e0", APIVersion:"apps/v1", ResourceVersion:"677", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-6gdxs
	E0717 21:58:18.474149       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-gm9sr" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [f150b170e27355813eced426bfbaaa9c33c6c671f014e911ad35bf864df25074] <==
	* W0717 21:54:39.705666       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0717 21:54:39.715236       1 node.go:136] Successfully retrieved node IP: 192.168.39.29
	I0717 21:54:39.715302       1 server_others.go:186] Using iptables Proxier.
	I0717 21:54:39.715637       1 server.go:583] Version: v1.18.20
	I0717 21:54:39.718332       1 config.go:315] Starting service config controller
	I0717 21:54:39.720598       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0717 21:54:39.721467       1 config.go:133] Starting endpoints config controller
	I0717 21:54:39.721676       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0717 21:54:39.821952       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0717 21:54:39.822360       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [5e1d4de0c5aa501f4f2722f3133b6d9a92bcaab7126759b1bf118b29e33d00cb] <==
	* I0717 21:54:19.332248       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0717 21:54:19.334566       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0717 21:54:19.337864       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 21:54:19.350190       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 21:54:19.338479       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0717 21:54:19.342574       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 21:54:19.358290       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 21:54:19.358621       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 21:54:19.358856       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 21:54:19.359029       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 21:54:19.359250       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 21:54:19.359405       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 21:54:19.359595       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 21:54:19.359746       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 21:54:19.359935       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 21:54:19.363289       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 21:54:19.363543       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 21:54:20.215846       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 21:54:20.325630       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 21:54:20.403222       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 21:54:20.410043       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 21:54:20.448252       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 21:54:20.480394       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 21:54:20.542733       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0717 21:54:20.951645       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-07-17 21:53:45 UTC, ends at Mon 2023-07-17 21:58:21 UTC. --
	Jul 17 21:55:22 ingress-addon-legacy-480151 kubelet[1429]: W0717 21:55:22.170597    1429 pod_container_deletor.go:77] Container "98b79ea5b36ebda3d21b04a7173ebab0cfa29808800f4400448a5c5af82aae8f" not found in pod's containers
	Jul 17 21:55:22 ingress-addon-legacy-480151 kubelet[1429]: W0717 21:55:22.173379    1429 pod_container_deletor.go:77] Container "02d6ccb8bb77f2e8bfc9287f535ca5b960152fc99f589c7304be5c6bb4d94f73" not found in pod's containers
	Jul 17 21:55:31 ingress-addon-legacy-480151 kubelet[1429]: I0717 21:55:31.733355    1429 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jul 17 21:55:31 ingress-addon-legacy-480151 kubelet[1429]: I0717 21:55:31.899408    1429 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-jxlnm" (UniqueName: "kubernetes.io/secret/3a458c85-ceb0-4cde-b77d-bc3e46018dd2-minikube-ingress-dns-token-jxlnm") pod "kube-ingress-dns-minikube" (UID: "3a458c85-ceb0-4cde-b77d-bc3e46018dd2")
	Jul 17 21:55:41 ingress-addon-legacy-480151 kubelet[1429]: I0717 21:55:41.354509    1429 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jul 17 21:55:41 ingress-addon-legacy-480151 kubelet[1429]: I0717 21:55:41.530962    1429 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-58v5d" (UniqueName: "kubernetes.io/secret/1573b30b-a311-4799-8a91-b4d776ee3681-default-token-58v5d") pod "nginx" (UID: "1573b30b-a311-4799-8a91-b4d776ee3681")
	Jul 17 21:58:00 ingress-addon-legacy-480151 kubelet[1429]: I0717 21:58:00.085467    1429 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jul 17 21:58:00 ingress-addon-legacy-480151 kubelet[1429]: I0717 21:58:00.199628    1429 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-58v5d" (UniqueName: "kubernetes.io/secret/8e18817a-c990-4093-bf22-29cc0b3ae94c-default-token-58v5d") pod "hello-world-app-5f5d8b66bb-6gdxs" (UID: "8e18817a-c990-4093-bf22-29cc0b3ae94c")
	Jul 17 21:58:01 ingress-addon-legacy-480151 kubelet[1429]: I0717 21:58:01.543969    1429 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: ca5b20e0158c8dae0a75d808cea08a633a326af8161953910bb3ed13bbde75b3
	Jul 17 21:58:01 ingress-addon-legacy-480151 kubelet[1429]: I0717 21:58:01.770474    1429 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: ca5b20e0158c8dae0a75d808cea08a633a326af8161953910bb3ed13bbde75b3
	Jul 17 21:58:01 ingress-addon-legacy-480151 kubelet[1429]: E0717 21:58:01.771036    1429 remote_runtime.go:295] ContainerStatus "ca5b20e0158c8dae0a75d808cea08a633a326af8161953910bb3ed13bbde75b3" from runtime service failed: rpc error: code = NotFound desc = could not find container "ca5b20e0158c8dae0a75d808cea08a633a326af8161953910bb3ed13bbde75b3": container with ID starting with ca5b20e0158c8dae0a75d808cea08a633a326af8161953910bb3ed13bbde75b3 not found: ID does not exist
	Jul 17 21:58:02 ingress-addon-legacy-480151 kubelet[1429]: I0717 21:58:02.609823    1429 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-jxlnm" (UniqueName: "kubernetes.io/secret/3a458c85-ceb0-4cde-b77d-bc3e46018dd2-minikube-ingress-dns-token-jxlnm") pod "3a458c85-ceb0-4cde-b77d-bc3e46018dd2" (UID: "3a458c85-ceb0-4cde-b77d-bc3e46018dd2")
	Jul 17 21:58:02 ingress-addon-legacy-480151 kubelet[1429]: I0717 21:58:02.618176    1429 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a458c85-ceb0-4cde-b77d-bc3e46018dd2-minikube-ingress-dns-token-jxlnm" (OuterVolumeSpecName: "minikube-ingress-dns-token-jxlnm") pod "3a458c85-ceb0-4cde-b77d-bc3e46018dd2" (UID: "3a458c85-ceb0-4cde-b77d-bc3e46018dd2"). InnerVolumeSpecName "minikube-ingress-dns-token-jxlnm". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 21:58:02 ingress-addon-legacy-480151 kubelet[1429]: I0717 21:58:02.710297    1429 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-jxlnm" (UniqueName: "kubernetes.io/secret/3a458c85-ceb0-4cde-b77d-bc3e46018dd2-minikube-ingress-dns-token-jxlnm") on node "ingress-addon-legacy-480151" DevicePath ""
	Jul 17 21:58:02 ingress-addon-legacy-480151 kubelet[1429]: E0717 21:58:02.958713    1429 kubelet_pods.go:1235] Failed killing the pod "kube-ingress-dns-minikube": failed to "KillContainer" for "minikube-ingress-dns" with KillContainerError: "rpc error: code = NotFound desc = could not find container \"ca5b20e0158c8dae0a75d808cea08a633a326af8161953910bb3ed13bbde75b3\": container with ID starting with ca5b20e0158c8dae0a75d808cea08a633a326af8161953910bb3ed13bbde75b3 not found: ID does not exist"
	Jul 17 21:58:13 ingress-addon-legacy-480151 kubelet[1429]: E0717 21:58:13.839790    1429 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-6gxjc.1772c6871d9bd5c5", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-6gxjc", UID:"b2d5cb81-e581-4854-b8b9-15968ee13dd1", APIVersion:"v1", ResourceVersion:"469", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-480151"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1258c3d71e383c5, ext:231560310618, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1258c3d71e383c5, ext:231560310618, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-6gxjc.1772c6871d9bd5c5" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 17 21:58:13 ingress-addon-legacy-480151 kubelet[1429]: E0717 21:58:13.870271    1429 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-6gxjc.1772c6871d9bd5c5", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-6gxjc", UID:"b2d5cb81-e581-4854-b8b9-15968ee13dd1", APIVersion:"v1", ResourceVersion:"469", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-480151"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1258c3d71e383c5, ext:231560310618, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1258c3d736686e3, ext:231585673848, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-6gxjc.1772c6871d9bd5c5" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 17 21:58:16 ingress-addon-legacy-480151 kubelet[1429]: W0717 21:58:16.600927    1429 pod_container_deletor.go:77] Container "86b2eda31ad7916f3635d3b26763fe250eccfac5155a247bbe83f08df6f4ac03" not found in pod's containers
	Jul 17 21:58:17 ingress-addon-legacy-480151 kubelet[1429]: I0717 21:58:17.960199    1429 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-lfd7q" (UniqueName: "kubernetes.io/secret/b2d5cb81-e581-4854-b8b9-15968ee13dd1-ingress-nginx-token-lfd7q") pod "b2d5cb81-e581-4854-b8b9-15968ee13dd1" (UID: "b2d5cb81-e581-4854-b8b9-15968ee13dd1")
	Jul 17 21:58:17 ingress-addon-legacy-480151 kubelet[1429]: I0717 21:58:17.960248    1429 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/b2d5cb81-e581-4854-b8b9-15968ee13dd1-webhook-cert") pod "b2d5cb81-e581-4854-b8b9-15968ee13dd1" (UID: "b2d5cb81-e581-4854-b8b9-15968ee13dd1")
	Jul 17 21:58:17 ingress-addon-legacy-480151 kubelet[1429]: I0717 21:58:17.963822    1429 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2d5cb81-e581-4854-b8b9-15968ee13dd1-ingress-nginx-token-lfd7q" (OuterVolumeSpecName: "ingress-nginx-token-lfd7q") pod "b2d5cb81-e581-4854-b8b9-15968ee13dd1" (UID: "b2d5cb81-e581-4854-b8b9-15968ee13dd1"). InnerVolumeSpecName "ingress-nginx-token-lfd7q". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 21:58:17 ingress-addon-legacy-480151 kubelet[1429]: I0717 21:58:17.964234    1429 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2d5cb81-e581-4854-b8b9-15968ee13dd1-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "b2d5cb81-e581-4854-b8b9-15968ee13dd1" (UID: "b2d5cb81-e581-4854-b8b9-15968ee13dd1"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 21:58:18 ingress-addon-legacy-480151 kubelet[1429]: I0717 21:58:18.060668    1429 reconciler.go:319] Volume detached for volume "ingress-nginx-token-lfd7q" (UniqueName: "kubernetes.io/secret/b2d5cb81-e581-4854-b8b9-15968ee13dd1-ingress-nginx-token-lfd7q") on node "ingress-addon-legacy-480151" DevicePath ""
	Jul 17 21:58:18 ingress-addon-legacy-480151 kubelet[1429]: I0717 21:58:18.060716    1429 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/b2d5cb81-e581-4854-b8b9-15968ee13dd1-webhook-cert") on node "ingress-addon-legacy-480151" DevicePath ""
	Jul 17 21:58:18 ingress-addon-legacy-480151 kubelet[1429]: W0717 21:58:18.958143    1429 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/b2d5cb81-e581-4854-b8b9-15968ee13dd1/volumes" does not exist
	
	* 
	* ==> storage-provisioner [6eb2c081912e9d5416cdef068ec2e91479ddb7dba2e279c6e5ccd982096af18c] <==
	* I0717 21:54:41.615580       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 21:54:41.625653       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 21:54:41.625716       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 21:54:41.633682       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"855e1dd5-6e1b-4373-9a27-6fb698183010", APIVersion:"v1", ResourceVersion:"393", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-480151_a86ba206-6b1a-4422-87e6-a1daf891c90c became leader
	I0717 21:54:41.633740       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 21:54:41.633870       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-480151_a86ba206-6b1a-4422-87e6-a1daf891c90c!
	I0717 21:54:41.734847       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-480151_a86ba206-6b1a-4422-87e6-a1daf891c90c!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-480151 -n ingress-addon-legacy-480151
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-480151 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (170.51s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (3.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-009530 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-009530 -- exec busybox-67b7f59bb-58859 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-009530 -- exec busybox-67b7f59bb-58859 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-009530 -- exec busybox-67b7f59bb-58859 -- sh -c "ping -c 1 192.168.39.1": exit status 1 (178.320898ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.39.1) from pod (busybox-67b7f59bb-58859): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-009530 -- exec busybox-67b7f59bb-p72ln -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-009530 -- exec busybox-67b7f59bb-p72ln -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-009530 -- exec busybox-67b7f59bb-p72ln -- sh -c "ping -c 1 192.168.39.1": exit status 1 (180.482605ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.39.1) from pod (busybox-67b7f59bb-p72ln): exit status 1
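
A likely reading of the two identical failures above: "ping: permission denied (are you root?)" is the message busybox ping emits when it cannot open an ICMP socket, which in an unprivileged pod requires root, the CAP_NET_RAW capability, or a net.ipv4.ping_group_range that includes the pod's group. The snippet below is a hypothetical diagnostic sketch, not part of the recorded test run; it reuses the kubectl context and pod name from the transcript and assumes the busybox image provides the id, cat and grep applets (all standard busybox tools):

	kubectl --context multinode-009530 exec busybox-67b7f59bb-58859 -- sh -c '
	  # user/group the failing ping ran as
	  id -u; id -g
	  # groups inside this range may use unprivileged (SOCK_DGRAM) ICMP sockets
	  cat /proc/sys/net/ipv4/ping_group_range
	  # effective capabilities of this container; CAP_NET_RAW is bit 13 (mask 0x2000)
	  grep CapEff /proc/self/status'

If the missing capability is the cause, the usual remedies are adding NET_RAW to the container's securityContext.capabilities or widening ping_group_range on the node; which of these the test environment intends cannot be determined from the log above.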
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-009530 -n multinode-009530
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-009530 logs -n 25: (1.239481481s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-876666 ssh -- ls                    | mount-start-2-876666 | jenkins | v1.31.0 | 17 Jul 23 22:02 UTC | 17 Jul 23 22:02 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-876666 ssh --                       | mount-start-2-876666 | jenkins | v1.31.0 | 17 Jul 23 22:02 UTC | 17 Jul 23 22:02 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-876666                           | mount-start-2-876666 | jenkins | v1.31.0 | 17 Jul 23 22:02 UTC | 17 Jul 23 22:02 UTC |
	| start   | -p mount-start-2-876666                           | mount-start-2-876666 | jenkins | v1.31.0 | 17 Jul 23 22:02 UTC | 17 Jul 23 22:03 UTC |
	| mount   | /home/jenkins:/minikube-host                      | mount-start-2-876666 | jenkins | v1.31.0 | 17 Jul 23 22:03 UTC |                     |
	|         | --profile mount-start-2-876666                    |                      |         |         |                     |                     |
	|         | --v 0 --9p-version 9p2000.L                       |                      |         |         |                     |                     |
	|         | --gid 0 --ip  --msize 6543                        |                      |         |         |                     |                     |
	|         | --port 46465 --type 9p --uid 0                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-876666 ssh -- ls                    | mount-start-2-876666 | jenkins | v1.31.0 | 17 Jul 23 22:03 UTC | 17 Jul 23 22:03 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-876666 ssh --                       | mount-start-2-876666 | jenkins | v1.31.0 | 17 Jul 23 22:03 UTC | 17 Jul 23 22:03 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-876666                           | mount-start-2-876666 | jenkins | v1.31.0 | 17 Jul 23 22:03 UTC | 17 Jul 23 22:03 UTC |
	| delete  | -p mount-start-1-853079                           | mount-start-1-853079 | jenkins | v1.31.0 | 17 Jul 23 22:03 UTC | 17 Jul 23 22:03 UTC |
	| start   | -p multinode-009530                               | multinode-009530     | jenkins | v1.31.0 | 17 Jul 23 22:03 UTC | 17 Jul 23 22:04 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=kvm2                                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-009530 -- apply -f                   | multinode-009530     | jenkins | v1.31.0 | 17 Jul 23 22:04 UTC | 17 Jul 23 22:04 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-009530 -- rollout                    | multinode-009530     | jenkins | v1.31.0 | 17 Jul 23 22:04 UTC | 17 Jul 23 22:04 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-009530 -- get pods -o                | multinode-009530     | jenkins | v1.31.0 | 17 Jul 23 22:04 UTC | 17 Jul 23 22:04 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-009530 -- get pods -o                | multinode-009530     | jenkins | v1.31.0 | 17 Jul 23 22:04 UTC | 17 Jul 23 22:04 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-009530 -- exec                       | multinode-009530     | jenkins | v1.31.0 | 17 Jul 23 22:04 UTC | 17 Jul 23 22:04 UTC |
	|         | busybox-67b7f59bb-58859 --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-009530 -- exec                       | multinode-009530     | jenkins | v1.31.0 | 17 Jul 23 22:04 UTC | 17 Jul 23 22:04 UTC |
	|         | busybox-67b7f59bb-p72ln --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-009530 -- exec                       | multinode-009530     | jenkins | v1.31.0 | 17 Jul 23 22:04 UTC | 17 Jul 23 22:04 UTC |
	|         | busybox-67b7f59bb-58859 --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-009530 -- exec                       | multinode-009530     | jenkins | v1.31.0 | 17 Jul 23 22:04 UTC | 17 Jul 23 22:04 UTC |
	|         | busybox-67b7f59bb-p72ln --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-009530 -- exec                       | multinode-009530     | jenkins | v1.31.0 | 17 Jul 23 22:04 UTC | 17 Jul 23 22:04 UTC |
	|         | busybox-67b7f59bb-58859 -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-009530 -- exec                       | multinode-009530     | jenkins | v1.31.0 | 17 Jul 23 22:04 UTC | 17 Jul 23 22:04 UTC |
	|         | busybox-67b7f59bb-p72ln -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-009530 -- get pods -o                | multinode-009530     | jenkins | v1.31.0 | 17 Jul 23 22:04 UTC | 17 Jul 23 22:04 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-009530 -- exec                       | multinode-009530     | jenkins | v1.31.0 | 17 Jul 23 22:04 UTC | 17 Jul 23 22:04 UTC |
	|         | busybox-67b7f59bb-58859                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-009530 -- exec                       | multinode-009530     | jenkins | v1.31.0 | 17 Jul 23 22:04 UTC |                     |
	|         | busybox-67b7f59bb-58859 -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-009530 -- exec                       | multinode-009530     | jenkins | v1.31.0 | 17 Jul 23 22:04 UTC | 17 Jul 23 22:04 UTC |
	|         | busybox-67b7f59bb-p72ln                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-009530 -- exec                       | multinode-009530     | jenkins | v1.31.0 | 17 Jul 23 22:04 UTC |                     |
	|         | busybox-67b7f59bb-p72ln -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 22:03:07
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 22:03:07.870915   34695 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:03:07.871068   34695 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:03:07.871079   34695 out.go:309] Setting ErrFile to fd 2...
	I0717 22:03:07.871083   34695 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:03:07.871295   34695 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-15759/.minikube/bin
	I0717 22:03:07.871895   34695 out.go:303] Setting JSON to false
	I0717 22:03:07.872772   34695 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6340,"bootTime":1689625048,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 22:03:07.872827   34695 start.go:138] virtualization: kvm guest
	I0717 22:03:07.875214   34695 out.go:177] * [multinode-009530] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 22:03:07.876664   34695 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 22:03:07.876684   34695 notify.go:220] Checking for updates...
	I0717 22:03:07.878190   34695 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 22:03:07.879867   34695 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:03:07.881270   34695 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-15759/.minikube
	I0717 22:03:07.882739   34695 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 22:03:07.884258   34695 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 22:03:07.885828   34695 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 22:03:07.924329   34695 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 22:03:07.925818   34695 start.go:298] selected driver: kvm2
	I0717 22:03:07.925835   34695 start.go:880] validating driver "kvm2" against <nil>
	I0717 22:03:07.925846   34695 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 22:03:07.926513   34695 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:03:07.926598   34695 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16899-15759/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 22:03:07.940348   34695 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.0
	I0717 22:03:07.940387   34695 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 22:03:07.940592   34695 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 22:03:07.940632   34695 cni.go:84] Creating CNI manager for ""
	I0717 22:03:07.940638   34695 cni.go:137] 0 nodes found, recommending kindnet
	I0717 22:03:07.940651   34695 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 22:03:07.940661   34695 start_flags.go:319] config:
	{Name:multinode-009530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-009530 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:03:07.940826   34695 iso.go:125] acquiring lock: {Name:mk0fc3164b0f4222cb2c3b274841202f1f6c0fc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:03:07.943395   34695 out.go:177] * Starting control plane node multinode-009530 in cluster multinode-009530
	I0717 22:03:07.944575   34695 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 22:03:07.944616   34695 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0717 22:03:07.944630   34695 cache.go:57] Caching tarball of preloaded images
	I0717 22:03:07.944717   34695 preload.go:174] Found /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 22:03:07.944730   34695 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 22:03:07.945071   34695 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/config.json ...
	I0717 22:03:07.945094   34695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/config.json: {Name:mka3698ee0500674f3cff10d1eaa1c8fba46fee3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:03:07.945238   34695 start.go:365] acquiring machines lock for multinode-009530: {Name:mk8a02d78ba6485e1a7d39c2e7feff944c6a9dbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 22:03:07.945275   34695 start.go:369] acquired machines lock for "multinode-009530" in 20.411µs
	I0717 22:03:07.945302   34695 start.go:93] Provisioning new machine with config: &{Name:multinode-009530 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-009530 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 22:03:07.945375   34695 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 22:03:07.947011   34695 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 22:03:07.947144   34695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:03:07.947193   34695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:03:07.960725   34695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42981
	I0717 22:03:07.961100   34695 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:03:07.961657   34695 main.go:141] libmachine: Using API Version  1
	I0717 22:03:07.961678   34695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:03:07.962053   34695 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:03:07.962249   34695 main.go:141] libmachine: (multinode-009530) Calling .GetMachineName
	I0717 22:03:07.962406   34695 main.go:141] libmachine: (multinode-009530) Calling .DriverName
	I0717 22:03:07.962539   34695 start.go:159] libmachine.API.Create for "multinode-009530" (driver="kvm2")
	I0717 22:03:07.962586   34695 client.go:168] LocalClient.Create starting
	I0717 22:03:07.962616   34695 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem
	I0717 22:03:07.962650   34695 main.go:141] libmachine: Decoding PEM data...
	I0717 22:03:07.962667   34695 main.go:141] libmachine: Parsing certificate...
	I0717 22:03:07.962716   34695 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem
	I0717 22:03:07.962743   34695 main.go:141] libmachine: Decoding PEM data...
	I0717 22:03:07.962759   34695 main.go:141] libmachine: Parsing certificate...
	I0717 22:03:07.962785   34695 main.go:141] libmachine: Running pre-create checks...
	I0717 22:03:07.962795   34695 main.go:141] libmachine: (multinode-009530) Calling .PreCreateCheck
	I0717 22:03:07.963153   34695 main.go:141] libmachine: (multinode-009530) Calling .GetConfigRaw
	I0717 22:03:07.963482   34695 main.go:141] libmachine: Creating machine...
	I0717 22:03:07.963496   34695 main.go:141] libmachine: (multinode-009530) Calling .Create
	I0717 22:03:07.963641   34695 main.go:141] libmachine: (multinode-009530) Creating KVM machine...
	I0717 22:03:07.964866   34695 main.go:141] libmachine: (multinode-009530) DBG | found existing default KVM network
	I0717 22:03:07.965551   34695 main.go:141] libmachine: (multinode-009530) DBG | I0717 22:03:07.965411   34718 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000298a0}
	I0717 22:03:07.970412   34695 main.go:141] libmachine: (multinode-009530) DBG | trying to create private KVM network mk-multinode-009530 192.168.39.0/24...
	I0717 22:03:08.042277   34695 main.go:141] libmachine: (multinode-009530) DBG | private KVM network mk-multinode-009530 192.168.39.0/24 created
	I0717 22:03:08.042321   34695 main.go:141] libmachine: (multinode-009530) DBG | I0717 22:03:08.042209   34718 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/16899-15759/.minikube
	I0717 22:03:08.042336   34695 main.go:141] libmachine: (multinode-009530) Setting up store path in /home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530 ...
	I0717 22:03:08.042354   34695 main.go:141] libmachine: (multinode-009530) Building disk image from file:///home/jenkins/minikube-integration/16899-15759/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso
	I0717 22:03:08.042373   34695 main.go:141] libmachine: (multinode-009530) Downloading /home/jenkins/minikube-integration/16899-15759/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16899-15759/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso...
	I0717 22:03:08.241173   34695 main.go:141] libmachine: (multinode-009530) DBG | I0717 22:03:08.241005   34718 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530/id_rsa...
	I0717 22:03:08.342307   34695 main.go:141] libmachine: (multinode-009530) DBG | I0717 22:03:08.342176   34718 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530/multinode-009530.rawdisk...
	I0717 22:03:08.342338   34695 main.go:141] libmachine: (multinode-009530) DBG | Writing magic tar header
	I0717 22:03:08.342350   34695 main.go:141] libmachine: (multinode-009530) DBG | Writing SSH key tar header
	I0717 22:03:08.342362   34695 main.go:141] libmachine: (multinode-009530) DBG | I0717 22:03:08.342292   34718 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530 ...
	I0717 22:03:08.342375   34695 main.go:141] libmachine: (multinode-009530) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530
	I0717 22:03:08.342441   34695 main.go:141] libmachine: (multinode-009530) Setting executable bit set on /home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530 (perms=drwx------)
	I0717 22:03:08.342466   34695 main.go:141] libmachine: (multinode-009530) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16899-15759/.minikube/machines
	I0717 22:03:08.342477   34695 main.go:141] libmachine: (multinode-009530) Setting executable bit set on /home/jenkins/minikube-integration/16899-15759/.minikube/machines (perms=drwxr-xr-x)
	I0717 22:03:08.342488   34695 main.go:141] libmachine: (multinode-009530) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16899-15759/.minikube
	I0717 22:03:08.342495   34695 main.go:141] libmachine: (multinode-009530) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16899-15759
	I0717 22:03:08.342504   34695 main.go:141] libmachine: (multinode-009530) Setting executable bit set on /home/jenkins/minikube-integration/16899-15759/.minikube (perms=drwxr-xr-x)
	I0717 22:03:08.342514   34695 main.go:141] libmachine: (multinode-009530) Setting executable bit set on /home/jenkins/minikube-integration/16899-15759 (perms=drwxrwxr-x)
	I0717 22:03:08.342524   34695 main.go:141] libmachine: (multinode-009530) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 22:03:08.342532   34695 main.go:141] libmachine: (multinode-009530) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 22:03:08.342538   34695 main.go:141] libmachine: (multinode-009530) Creating domain...
	I0717 22:03:08.342547   34695 main.go:141] libmachine: (multinode-009530) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 22:03:08.342553   34695 main.go:141] libmachine: (multinode-009530) DBG | Checking permissions on dir: /home/jenkins
	I0717 22:03:08.342581   34695 main.go:141] libmachine: (multinode-009530) DBG | Checking permissions on dir: /home
	I0717 22:03:08.342601   34695 main.go:141] libmachine: (multinode-009530) DBG | Skipping /home - not owner
	I0717 22:03:08.343645   34695 main.go:141] libmachine: (multinode-009530) define libvirt domain using xml: 
	I0717 22:03:08.343668   34695 main.go:141] libmachine: (multinode-009530) <domain type='kvm'>
	I0717 22:03:08.343678   34695 main.go:141] libmachine: (multinode-009530)   <name>multinode-009530</name>
	I0717 22:03:08.343686   34695 main.go:141] libmachine: (multinode-009530)   <memory unit='MiB'>2200</memory>
	I0717 22:03:08.343701   34695 main.go:141] libmachine: (multinode-009530)   <vcpu>2</vcpu>
	I0717 22:03:08.343719   34695 main.go:141] libmachine: (multinode-009530)   <features>
	I0717 22:03:08.343730   34695 main.go:141] libmachine: (multinode-009530)     <acpi/>
	I0717 22:03:08.343746   34695 main.go:141] libmachine: (multinode-009530)     <apic/>
	I0717 22:03:08.343755   34695 main.go:141] libmachine: (multinode-009530)     <pae/>
	I0717 22:03:08.343762   34695 main.go:141] libmachine: (multinode-009530)     
	I0717 22:03:08.343782   34695 main.go:141] libmachine: (multinode-009530)   </features>
	I0717 22:03:08.343800   34695 main.go:141] libmachine: (multinode-009530)   <cpu mode='host-passthrough'>
	I0717 22:03:08.343811   34695 main.go:141] libmachine: (multinode-009530)   
	I0717 22:03:08.343824   34695 main.go:141] libmachine: (multinode-009530)   </cpu>
	I0717 22:03:08.343837   34695 main.go:141] libmachine: (multinode-009530)   <os>
	I0717 22:03:08.343856   34695 main.go:141] libmachine: (multinode-009530)     <type>hvm</type>
	I0717 22:03:08.343869   34695 main.go:141] libmachine: (multinode-009530)     <boot dev='cdrom'/>
	I0717 22:03:08.343882   34695 main.go:141] libmachine: (multinode-009530)     <boot dev='hd'/>
	I0717 22:03:08.343894   34695 main.go:141] libmachine: (multinode-009530)     <bootmenu enable='no'/>
	I0717 22:03:08.343922   34695 main.go:141] libmachine: (multinode-009530)   </os>
	I0717 22:03:08.343947   34695 main.go:141] libmachine: (multinode-009530)   <devices>
	I0717 22:03:08.343963   34695 main.go:141] libmachine: (multinode-009530)     <disk type='file' device='cdrom'>
	I0717 22:03:08.343974   34695 main.go:141] libmachine: (multinode-009530)       <source file='/home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530/boot2docker.iso'/>
	I0717 22:03:08.343985   34695 main.go:141] libmachine: (multinode-009530)       <target dev='hdc' bus='scsi'/>
	I0717 22:03:08.344007   34695 main.go:141] libmachine: (multinode-009530)       <readonly/>
	I0717 22:03:08.344022   34695 main.go:141] libmachine: (multinode-009530)     </disk>
	I0717 22:03:08.344040   34695 main.go:141] libmachine: (multinode-009530)     <disk type='file' device='disk'>
	I0717 22:03:08.344056   34695 main.go:141] libmachine: (multinode-009530)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 22:03:08.344073   34695 main.go:141] libmachine: (multinode-009530)       <source file='/home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530/multinode-009530.rawdisk'/>
	I0717 22:03:08.344087   34695 main.go:141] libmachine: (multinode-009530)       <target dev='hda' bus='virtio'/>
	I0717 22:03:08.344095   34695 main.go:141] libmachine: (multinode-009530)     </disk>
	I0717 22:03:08.344108   34695 main.go:141] libmachine: (multinode-009530)     <interface type='network'>
	I0717 22:03:08.344125   34695 main.go:141] libmachine: (multinode-009530)       <source network='mk-multinode-009530'/>
	I0717 22:03:08.344140   34695 main.go:141] libmachine: (multinode-009530)       <model type='virtio'/>
	I0717 22:03:08.344152   34695 main.go:141] libmachine: (multinode-009530)     </interface>
	I0717 22:03:08.344166   34695 main.go:141] libmachine: (multinode-009530)     <interface type='network'>
	I0717 22:03:08.344178   34695 main.go:141] libmachine: (multinode-009530)       <source network='default'/>
	I0717 22:03:08.344189   34695 main.go:141] libmachine: (multinode-009530)       <model type='virtio'/>
	I0717 22:03:08.344204   34695 main.go:141] libmachine: (multinode-009530)     </interface>
	I0717 22:03:08.344236   34695 main.go:141] libmachine: (multinode-009530)     <serial type='pty'>
	I0717 22:03:08.344261   34695 main.go:141] libmachine: (multinode-009530)       <target port='0'/>
	I0717 22:03:08.344285   34695 main.go:141] libmachine: (multinode-009530)     </serial>
	I0717 22:03:08.344306   34695 main.go:141] libmachine: (multinode-009530)     <console type='pty'>
	I0717 22:03:08.344323   34695 main.go:141] libmachine: (multinode-009530)       <target type='serial' port='0'/>
	I0717 22:03:08.344341   34695 main.go:141] libmachine: (multinode-009530)     </console>
	I0717 22:03:08.344354   34695 main.go:141] libmachine: (multinode-009530)     <rng model='virtio'>
	I0717 22:03:08.344371   34695 main.go:141] libmachine: (multinode-009530)       <backend model='random'>/dev/random</backend>
	I0717 22:03:08.344384   34695 main.go:141] libmachine: (multinode-009530)     </rng>
	I0717 22:03:08.344393   34695 main.go:141] libmachine: (multinode-009530)     
	I0717 22:03:08.344404   34695 main.go:141] libmachine: (multinode-009530)     
	I0717 22:03:08.344424   34695 main.go:141] libmachine: (multinode-009530)   </devices>
	I0717 22:03:08.344438   34695 main.go:141] libmachine: (multinode-009530) </domain>
	I0717 22:03:08.344450   34695 main.go:141] libmachine: (multinode-009530) 
	I0717 22:03:08.348483   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:a2:5c:99 in network default
	I0717 22:03:08.349009   34695 main.go:141] libmachine: (multinode-009530) Ensuring networks are active...
	I0717 22:03:08.349047   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:08.349733   34695 main.go:141] libmachine: (multinode-009530) Ensuring network default is active
	I0717 22:03:08.349976   34695 main.go:141] libmachine: (multinode-009530) Ensuring network mk-multinode-009530 is active
	I0717 22:03:08.350414   34695 main.go:141] libmachine: (multinode-009530) Getting domain xml...
	I0717 22:03:08.351155   34695 main.go:141] libmachine: (multinode-009530) Creating domain...
	I0717 22:03:08.690584   34695 main.go:141] libmachine: (multinode-009530) Waiting to get IP...
	I0717 22:03:08.691455   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:08.691864   34695 main.go:141] libmachine: (multinode-009530) DBG | unable to find current IP address of domain multinode-009530 in network mk-multinode-009530
	I0717 22:03:08.691903   34695 main.go:141] libmachine: (multinode-009530) DBG | I0717 22:03:08.691861   34718 retry.go:31] will retry after 265.171785ms: waiting for machine to come up
	I0717 22:03:08.958227   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:08.958608   34695 main.go:141] libmachine: (multinode-009530) DBG | unable to find current IP address of domain multinode-009530 in network mk-multinode-009530
	I0717 22:03:08.958631   34695 main.go:141] libmachine: (multinode-009530) DBG | I0717 22:03:08.958584   34718 retry.go:31] will retry after 354.127065ms: waiting for machine to come up
	I0717 22:03:09.313887   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:09.314318   34695 main.go:141] libmachine: (multinode-009530) DBG | unable to find current IP address of domain multinode-009530 in network mk-multinode-009530
	I0717 22:03:09.314343   34695 main.go:141] libmachine: (multinode-009530) DBG | I0717 22:03:09.314291   34718 retry.go:31] will retry after 405.197228ms: waiting for machine to come up
	I0717 22:03:09.720919   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:09.721419   34695 main.go:141] libmachine: (multinode-009530) DBG | unable to find current IP address of domain multinode-009530 in network mk-multinode-009530
	I0717 22:03:09.721447   34695 main.go:141] libmachine: (multinode-009530) DBG | I0717 22:03:09.721368   34718 retry.go:31] will retry after 609.264757ms: waiting for machine to come up
	I0717 22:03:10.331698   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:10.332210   34695 main.go:141] libmachine: (multinode-009530) DBG | unable to find current IP address of domain multinode-009530 in network mk-multinode-009530
	I0717 22:03:10.332232   34695 main.go:141] libmachine: (multinode-009530) DBG | I0717 22:03:10.332173   34718 retry.go:31] will retry after 529.009826ms: waiting for machine to come up
	I0717 22:03:10.862971   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:10.863467   34695 main.go:141] libmachine: (multinode-009530) DBG | unable to find current IP address of domain multinode-009530 in network mk-multinode-009530
	I0717 22:03:10.863499   34695 main.go:141] libmachine: (multinode-009530) DBG | I0717 22:03:10.863410   34718 retry.go:31] will retry after 879.97711ms: waiting for machine to come up
	I0717 22:03:11.744634   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:11.745103   34695 main.go:141] libmachine: (multinode-009530) DBG | unable to find current IP address of domain multinode-009530 in network mk-multinode-009530
	I0717 22:03:11.745122   34695 main.go:141] libmachine: (multinode-009530) DBG | I0717 22:03:11.745067   34718 retry.go:31] will retry after 1.044333113s: waiting for machine to come up
	I0717 22:03:12.791294   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:12.791763   34695 main.go:141] libmachine: (multinode-009530) DBG | unable to find current IP address of domain multinode-009530 in network mk-multinode-009530
	I0717 22:03:12.791803   34695 main.go:141] libmachine: (multinode-009530) DBG | I0717 22:03:12.791727   34718 retry.go:31] will retry after 1.440860822s: waiting for machine to come up
	I0717 22:03:14.235002   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:14.235613   34695 main.go:141] libmachine: (multinode-009530) DBG | unable to find current IP address of domain multinode-009530 in network mk-multinode-009530
	I0717 22:03:14.235655   34695 main.go:141] libmachine: (multinode-009530) DBG | I0717 22:03:14.235508   34718 retry.go:31] will retry after 1.683405631s: waiting for machine to come up
	I0717 22:03:15.921481   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:15.921935   34695 main.go:141] libmachine: (multinode-009530) DBG | unable to find current IP address of domain multinode-009530 in network mk-multinode-009530
	I0717 22:03:15.921965   34695 main.go:141] libmachine: (multinode-009530) DBG | I0717 22:03:15.921887   34718 retry.go:31] will retry after 2.158475949s: waiting for machine to come up
	I0717 22:03:18.082275   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:18.082707   34695 main.go:141] libmachine: (multinode-009530) DBG | unable to find current IP address of domain multinode-009530 in network mk-multinode-009530
	I0717 22:03:18.082740   34695 main.go:141] libmachine: (multinode-009530) DBG | I0717 22:03:18.082648   34718 retry.go:31] will retry after 1.766448912s: waiting for machine to come up
	I0717 22:03:19.851809   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:19.852309   34695 main.go:141] libmachine: (multinode-009530) DBG | unable to find current IP address of domain multinode-009530 in network mk-multinode-009530
	I0717 22:03:19.852335   34695 main.go:141] libmachine: (multinode-009530) DBG | I0717 22:03:19.852276   34718 retry.go:31] will retry after 3.165746037s: waiting for machine to come up
	I0717 22:03:23.020149   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:23.020515   34695 main.go:141] libmachine: (multinode-009530) DBG | unable to find current IP address of domain multinode-009530 in network mk-multinode-009530
	I0717 22:03:23.020534   34695 main.go:141] libmachine: (multinode-009530) DBG | I0717 22:03:23.020484   34718 retry.go:31] will retry after 4.255016862s: waiting for machine to come up
	I0717 22:03:27.279779   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:27.280326   34695 main.go:141] libmachine: (multinode-009530) DBG | unable to find current IP address of domain multinode-009530 in network mk-multinode-009530
	I0717 22:03:27.280345   34695 main.go:141] libmachine: (multinode-009530) DBG | I0717 22:03:27.280273   34718 retry.go:31] will retry after 3.819988462s: waiting for machine to come up
	I0717 22:03:31.102607   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:31.103139   34695 main.go:141] libmachine: (multinode-009530) Found IP for machine: 192.168.39.222
	I0717 22:03:31.103170   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has current primary IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:31.103180   34695 main.go:141] libmachine: (multinode-009530) Reserving static IP address...
	I0717 22:03:31.103578   34695 main.go:141] libmachine: (multinode-009530) DBG | unable to find host DHCP lease matching {name: "multinode-009530", mac: "52:54:00:64:61:2c", ip: "192.168.39.222"} in network mk-multinode-009530
	I0717 22:03:31.178332   34695 main.go:141] libmachine: (multinode-009530) DBG | Getting to WaitForSSH function...
	I0717 22:03:31.178383   34695 main.go:141] libmachine: (multinode-009530) Reserved static IP address: 192.168.39.222
	I0717 22:03:31.178423   34695 main.go:141] libmachine: (multinode-009530) Waiting for SSH to be available...
	I0717 22:03:31.182497   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:31.182982   34695 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:minikube Clientid:01:52:54:00:64:61:2c}
	I0717 22:03:31.183023   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:31.183099   34695 main.go:141] libmachine: (multinode-009530) DBG | Using SSH client type: external
	I0717 22:03:31.183134   34695 main.go:141] libmachine: (multinode-009530) DBG | Using SSH private key: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530/id_rsa (-rw-------)
	I0717 22:03:31.183171   34695 main.go:141] libmachine: (multinode-009530) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 22:03:31.183189   34695 main.go:141] libmachine: (multinode-009530) DBG | About to run SSH command:
	I0717 22:03:31.183215   34695 main.go:141] libmachine: (multinode-009530) DBG | exit 0
	I0717 22:03:31.277595   34695 main.go:141] libmachine: (multinode-009530) DBG | SSH cmd err, output: <nil>: 
	I0717 22:03:31.277906   34695 main.go:141] libmachine: (multinode-009530) KVM machine creation complete!
	I0717 22:03:31.278171   34695 main.go:141] libmachine: (multinode-009530) Calling .GetConfigRaw
	I0717 22:03:31.278762   34695 main.go:141] libmachine: (multinode-009530) Calling .DriverName
	I0717 22:03:31.278979   34695 main.go:141] libmachine: (multinode-009530) Calling .DriverName
	I0717 22:03:31.279152   34695 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 22:03:31.279164   34695 main.go:141] libmachine: (multinode-009530) Calling .GetState
	I0717 22:03:31.280424   34695 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 22:03:31.280444   34695 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 22:03:31.280453   34695 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 22:03:31.280462   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHHostname
	I0717 22:03:31.282770   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:31.283167   34695 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:03:31.283207   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:31.283398   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHPort
	I0717 22:03:31.283545   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:03:31.283730   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:03:31.283844   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHUsername
	I0717 22:03:31.284019   34695 main.go:141] libmachine: Using SSH client type: native
	I0717 22:03:31.284464   34695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0717 22:03:31.284479   34695 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 22:03:31.409076   34695 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:03:31.409096   34695 main.go:141] libmachine: Detecting the provisioner...
	I0717 22:03:31.409103   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHHostname
	I0717 22:03:31.412300   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:31.412710   34695 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:03:31.412743   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:31.412908   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHPort
	I0717 22:03:31.413141   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:03:31.413304   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:03:31.413471   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHUsername
	I0717 22:03:31.413622   34695 main.go:141] libmachine: Using SSH client type: native
	I0717 22:03:31.414039   34695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0717 22:03:31.414054   34695 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 22:03:31.538333   34695 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gf5d52c7-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0717 22:03:31.538425   34695 main.go:141] libmachine: found compatible host: buildroot
	I0717 22:03:31.538443   34695 main.go:141] libmachine: Provisioning with buildroot...
	I0717 22:03:31.538459   34695 main.go:141] libmachine: (multinode-009530) Calling .GetMachineName
	I0717 22:03:31.538723   34695 buildroot.go:166] provisioning hostname "multinode-009530"
	I0717 22:03:31.538743   34695 main.go:141] libmachine: (multinode-009530) Calling .GetMachineName
	I0717 22:03:31.538915   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHHostname
	I0717 22:03:31.541613   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:31.542057   34695 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:03:31.542085   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:31.542273   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHPort
	I0717 22:03:31.542455   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:03:31.542606   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:03:31.542774   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHUsername
	I0717 22:03:31.542955   34695 main.go:141] libmachine: Using SSH client type: native
	I0717 22:03:31.543346   34695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0717 22:03:31.543360   34695 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-009530 && echo "multinode-009530" | sudo tee /etc/hostname
	I0717 22:03:31.679160   34695 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-009530
	
	I0717 22:03:31.679218   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHHostname
	I0717 22:03:31.682307   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:31.682761   34695 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:03:31.682810   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:31.683015   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHPort
	I0717 22:03:31.683212   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:03:31.683372   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:03:31.683519   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHUsername
	I0717 22:03:31.683755   34695 main.go:141] libmachine: Using SSH client type: native
	I0717 22:03:31.684174   34695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0717 22:03:31.684192   34695 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-009530' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-009530/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-009530' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 22:03:31.818026   34695 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:03:31.818053   34695 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16899-15759/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-15759/.minikube}
	I0717 22:03:31.818069   34695 buildroot.go:174] setting up certificates
	I0717 22:03:31.818076   34695 provision.go:83] configureAuth start
	I0717 22:03:31.818084   34695 main.go:141] libmachine: (multinode-009530) Calling .GetMachineName
	I0717 22:03:31.818351   34695 main.go:141] libmachine: (multinode-009530) Calling .GetIP
	I0717 22:03:31.821347   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:31.821806   34695 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:03:31.821846   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:31.822011   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHHostname
	I0717 22:03:31.824462   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:31.824856   34695 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:03:31.824900   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:31.825066   34695 provision.go:138] copyHostCerts
	I0717 22:03:31.825096   34695 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem
	I0717 22:03:31.825123   34695 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem, removing ...
	I0717 22:03:31.825129   34695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem
	I0717 22:03:31.825182   34695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem (1123 bytes)
	I0717 22:03:31.825292   34695 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem
	I0717 22:03:31.825315   34695 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem, removing ...
	I0717 22:03:31.825319   34695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem
	I0717 22:03:31.825342   34695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem (1675 bytes)
	I0717 22:03:31.825394   34695 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem
	I0717 22:03:31.825416   34695 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem, removing ...
	I0717 22:03:31.825419   34695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem
	I0717 22:03:31.825436   34695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem (1078 bytes)
	I0717 22:03:31.825500   34695 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem org=jenkins.multinode-009530 san=[192.168.39.222 192.168.39.222 localhost 127.0.0.1 minikube multinode-009530]
	I0717 22:03:31.958768   34695 provision.go:172] copyRemoteCerts
	I0717 22:03:31.958838   34695 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 22:03:31.958861   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHHostname
	I0717 22:03:31.962045   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:31.962546   34695 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:03:31.962590   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:31.962788   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHPort
	I0717 22:03:31.963022   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:03:31.963252   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHUsername
	I0717 22:03:31.963509   34695 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530/id_rsa Username:docker}
	I0717 22:03:32.055389   34695 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 22:03:32.055459   34695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 22:03:32.079949   34695 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 22:03:32.080041   34695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0717 22:03:32.103779   34695 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 22:03:32.103851   34695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 22:03:32.127317   34695 provision.go:86] duration metric: configureAuth took 309.227273ms
	I0717 22:03:32.127344   34695 buildroot.go:189] setting minikube options for container-runtime
	I0717 22:03:32.127535   34695 config.go:182] Loaded profile config "multinode-009530": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:03:32.127628   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHHostname
	I0717 22:03:32.130172   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:32.130682   34695 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:03:32.130716   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:32.130854   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHPort
	I0717 22:03:32.131064   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:03:32.131244   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:03:32.131384   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHUsername
	I0717 22:03:32.131559   34695 main.go:141] libmachine: Using SSH client type: native
	I0717 22:03:32.131966   34695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0717 22:03:32.131987   34695 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 22:03:32.450499   34695 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 22:03:32.450524   34695 main.go:141] libmachine: Checking connection to Docker...
	I0717 22:03:32.450536   34695 main.go:141] libmachine: (multinode-009530) Calling .GetURL
	I0717 22:03:32.451708   34695 main.go:141] libmachine: (multinode-009530) DBG | Using libvirt version 6000000
	I0717 22:03:32.454178   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:32.454619   34695 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:03:32.454650   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:32.454870   34695 main.go:141] libmachine: Docker is up and running!
	I0717 22:03:32.454882   34695 main.go:141] libmachine: Reticulating splines...
	I0717 22:03:32.454887   34695 client.go:171] LocalClient.Create took 24.492290381s
	I0717 22:03:32.454910   34695 start.go:167] duration metric: libmachine.API.Create for "multinode-009530" took 24.492370338s
	I0717 22:03:32.454924   34695 start.go:300] post-start starting for "multinode-009530" (driver="kvm2")
	I0717 22:03:32.454935   34695 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 22:03:32.454957   34695 main.go:141] libmachine: (multinode-009530) Calling .DriverName
	I0717 22:03:32.455251   34695 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 22:03:32.455279   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHHostname
	I0717 22:03:32.457284   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:32.457658   34695 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:03:32.457693   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:32.457867   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHPort
	I0717 22:03:32.458065   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:03:32.458225   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHUsername
	I0717 22:03:32.458347   34695 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530/id_rsa Username:docker}
	I0717 22:03:32.550883   34695 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 22:03:32.555334   34695 command_runner.go:130] > NAME=Buildroot
	I0717 22:03:32.555352   34695 command_runner.go:130] > VERSION=2021.02.12-1-gf5d52c7-dirty
	I0717 22:03:32.555356   34695 command_runner.go:130] > ID=buildroot
	I0717 22:03:32.555362   34695 command_runner.go:130] > VERSION_ID=2021.02.12
	I0717 22:03:32.555378   34695 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0717 22:03:32.555541   34695 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 22:03:32.555564   34695 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/addons for local assets ...
	I0717 22:03:32.555631   34695 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/files for local assets ...
	I0717 22:03:32.555746   34695 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> 229902.pem in /etc/ssl/certs
	I0717 22:03:32.555761   34695 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> /etc/ssl/certs/229902.pem
	I0717 22:03:32.555883   34695 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 22:03:32.564388   34695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:03:32.590008   34695 start.go:303] post-start completed in 135.070402ms
	I0717 22:03:32.590065   34695 main.go:141] libmachine: (multinode-009530) Calling .GetConfigRaw
	I0717 22:03:32.590597   34695 main.go:141] libmachine: (multinode-009530) Calling .GetIP
	I0717 22:03:32.593328   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:32.593775   34695 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:03:32.593797   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:32.594027   34695 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/config.json ...
	I0717 22:03:32.594216   34695 start.go:128] duration metric: createHost completed in 24.6488333s
	I0717 22:03:32.594246   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHHostname
	I0717 22:03:32.596871   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:32.597235   34695 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:03:32.597268   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:32.597413   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHPort
	I0717 22:03:32.597656   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:03:32.597878   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:03:32.598029   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHUsername
	I0717 22:03:32.598215   34695 main.go:141] libmachine: Using SSH client type: native
	I0717 22:03:32.598708   34695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0717 22:03:32.598726   34695 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 22:03:32.722935   34695 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689631412.706444734
	
	I0717 22:03:32.722962   34695 fix.go:206] guest clock: 1689631412.706444734
	I0717 22:03:32.722989   34695 fix.go:219] Guest: 2023-07-17 22:03:32.706444734 +0000 UTC Remote: 2023-07-17 22:03:32.594232956 +0000 UTC m=+24.755298361 (delta=112.211778ms)
	I0717 22:03:32.723017   34695 fix.go:190] guest clock delta is within tolerance: 112.211778ms
	I0717 22:03:32.723029   34695 start.go:83] releasing machines lock for "multinode-009530", held for 24.777740844s
	I0717 22:03:32.723060   34695 main.go:141] libmachine: (multinode-009530) Calling .DriverName
	I0717 22:03:32.723363   34695 main.go:141] libmachine: (multinode-009530) Calling .GetIP
	I0717 22:03:32.726030   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:32.726435   34695 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:03:32.726462   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:32.726663   34695 main.go:141] libmachine: (multinode-009530) Calling .DriverName
	I0717 22:03:32.727181   34695 main.go:141] libmachine: (multinode-009530) Calling .DriverName
	I0717 22:03:32.727389   34695 main.go:141] libmachine: (multinode-009530) Calling .DriverName
	I0717 22:03:32.727499   34695 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 22:03:32.727561   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHHostname
	I0717 22:03:32.727588   34695 ssh_runner.go:195] Run: cat /version.json
	I0717 22:03:32.727609   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHHostname
	I0717 22:03:32.730082   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:32.730312   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:32.730391   34695 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:03:32.730419   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:32.730524   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHPort
	I0717 22:03:32.730700   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:03:32.730753   34695 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:03:32.730784   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:32.730895   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHUsername
	I0717 22:03:32.730959   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHPort
	I0717 22:03:32.731060   34695 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530/id_rsa Username:docker}
	I0717 22:03:32.731154   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:03:32.731307   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHUsername
	I0717 22:03:32.731441   34695 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530/id_rsa Username:docker}
	I0717 22:03:32.848601   34695 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0717 22:03:32.848684   34695 command_runner.go:130] > {"iso_version": "v1.31.0", "kicbase_version": "v0.0.40", "minikube_version": "v1.31.0", "commit": "be0194f682c2c37366eacb8c13503cb6c7a41cf8"}
	I0717 22:03:32.848797   34695 ssh_runner.go:195] Run: systemctl --version
	I0717 22:03:32.854926   34695 command_runner.go:130] > systemd 247 (247)
	I0717 22:03:32.854959   34695 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0717 22:03:32.855018   34695 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 22:03:33.014608   34695 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 22:03:33.021559   34695 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0717 22:03:33.021778   34695 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 22:03:33.021851   34695 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:03:33.036965   34695 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0717 22:03:33.036994   34695 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 22:03:33.037002   34695 start.go:466] detecting cgroup driver to use...
	I0717 22:03:33.037076   34695 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 22:03:33.055566   34695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 22:03:33.070522   34695 docker.go:196] disabling cri-docker service (if available) ...
	I0717 22:03:33.070606   34695 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 22:03:33.086734   34695 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 22:03:33.100741   34695 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 22:03:33.115521   34695 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0717 22:03:33.217263   34695 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 22:03:33.230814   34695 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0717 22:03:33.330990   34695 docker.go:212] disabling docker service ...
	I0717 22:03:33.331048   34695 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 22:03:33.345389   34695 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 22:03:33.356854   34695 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0717 22:03:33.356969   34695 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 22:03:33.460701   34695 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0717 22:03:33.460763   34695 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 22:03:33.474720   34695 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0717 22:03:33.475058   34695 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0717 22:03:33.567982   34695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 22:03:33.580261   34695 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 22:03:33.597401   34695 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0717 22:03:33.597814   34695 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 22:03:33.597897   34695 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:03:33.607267   34695 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 22:03:33.607337   34695 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:03:33.617661   34695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:03:33.628210   34695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:03:33.638535   34695 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 22:03:33.649097   34695 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 22:03:33.658122   34695 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 22:03:33.658155   34695 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 22:03:33.658195   34695 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 22:03:33.671972   34695 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 22:03:33.681564   34695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 22:03:33.787807   34695 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 22:03:33.950799   34695 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 22:03:33.950854   34695 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 22:03:33.956313   34695 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0717 22:03:33.956338   34695 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0717 22:03:33.956347   34695 command_runner.go:130] > Device: 16h/22d	Inode: 719         Links: 1
	I0717 22:03:33.956357   34695 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 22:03:33.956367   34695 command_runner.go:130] > Access: 2023-07-17 22:03:33.921333088 +0000
	I0717 22:03:33.956384   34695 command_runner.go:130] > Modify: 2023-07-17 22:03:33.921333088 +0000
	I0717 22:03:33.956397   34695 command_runner.go:130] > Change: 2023-07-17 22:03:33.921333088 +0000
	I0717 22:03:33.956410   34695 command_runner.go:130] >  Birth: -
	I0717 22:03:33.956428   34695 start.go:534] Will wait 60s for crictl version
	I0717 22:03:33.956471   34695 ssh_runner.go:195] Run: which crictl
	I0717 22:03:33.960082   34695 command_runner.go:130] > /usr/bin/crictl
	I0717 22:03:33.960150   34695 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 22:03:33.995758   34695 command_runner.go:130] > Version:  0.1.0
	I0717 22:03:33.995779   34695 command_runner.go:130] > RuntimeName:  cri-o
	I0717 22:03:33.995786   34695 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0717 22:03:33.995794   34695 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0717 22:03:33.995854   34695 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 22:03:33.995923   34695 ssh_runner.go:195] Run: crio --version
	I0717 22:03:34.041209   34695 command_runner.go:130] > crio version 1.24.1
	I0717 22:03:34.041232   34695 command_runner.go:130] > Version:          1.24.1
	I0717 22:03:34.041242   34695 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0717 22:03:34.041248   34695 command_runner.go:130] > GitTreeState:     dirty
	I0717 22:03:34.041256   34695 command_runner.go:130] > BuildDate:        2023-07-15T02:24:22Z
	I0717 22:03:34.041263   34695 command_runner.go:130] > GoVersion:        go1.19.9
	I0717 22:03:34.041269   34695 command_runner.go:130] > Compiler:         gc
	I0717 22:03:34.041275   34695 command_runner.go:130] > Platform:         linux/amd64
	I0717 22:03:34.041281   34695 command_runner.go:130] > Linkmode:         dynamic
	I0717 22:03:34.041292   34695 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0717 22:03:34.041303   34695 command_runner.go:130] > SeccompEnabled:   true
	I0717 22:03:34.041311   34695 command_runner.go:130] > AppArmorEnabled:  false
	I0717 22:03:34.041385   34695 ssh_runner.go:195] Run: crio --version
	I0717 22:03:34.086927   34695 command_runner.go:130] > crio version 1.24.1
	I0717 22:03:34.086953   34695 command_runner.go:130] > Version:          1.24.1
	I0717 22:03:34.086967   34695 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0717 22:03:34.086980   34695 command_runner.go:130] > GitTreeState:     dirty
	I0717 22:03:34.086989   34695 command_runner.go:130] > BuildDate:        2023-07-15T02:24:22Z
	I0717 22:03:34.086996   34695 command_runner.go:130] > GoVersion:        go1.19.9
	I0717 22:03:34.087002   34695 command_runner.go:130] > Compiler:         gc
	I0717 22:03:34.087009   34695 command_runner.go:130] > Platform:         linux/amd64
	I0717 22:03:34.087018   34695 command_runner.go:130] > Linkmode:         dynamic
	I0717 22:03:34.087029   34695 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0717 22:03:34.087035   34695 command_runner.go:130] > SeccompEnabled:   true
	I0717 22:03:34.087042   34695 command_runner.go:130] > AppArmorEnabled:  false
	I0717 22:03:34.089494   34695 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 22:03:34.091334   34695 main.go:141] libmachine: (multinode-009530) Calling .GetIP
	I0717 22:03:34.094183   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:34.094543   34695 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:03:34.094566   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:03:34.094785   34695 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 22:03:34.098988   34695 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:03:34.113217   34695 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 22:03:34.113263   34695 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:03:34.138819   34695 command_runner.go:130] > {
	I0717 22:03:34.138845   34695 command_runner.go:130] >   "images": [
	I0717 22:03:34.138850   34695 command_runner.go:130] >   ]
	I0717 22:03:34.138855   34695 command_runner.go:130] > }
	I0717 22:03:34.138973   34695 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 22:03:34.139038   34695 ssh_runner.go:195] Run: which lz4
	I0717 22:03:34.142912   34695 command_runner.go:130] > /usr/bin/lz4
	I0717 22:03:34.142938   34695 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0717 22:03:34.143047   34695 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 22:03:34.147339   34695 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 22:03:34.147372   34695 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 22:03:34.147390   34695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0717 22:03:35.875764   34695 crio.go:444] Took 1.732757 seconds to copy over tarball
	I0717 22:03:35.875846   34695 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 22:03:38.532747   34695 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.656869986s)
	I0717 22:03:38.532773   34695 crio.go:451] Took 2.656977 seconds to extract the tarball
	I0717 22:03:38.532785   34695 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 22:03:38.572679   34695 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:03:38.637836   34695 command_runner.go:130] > {
	I0717 22:03:38.637860   34695 command_runner.go:130] >   "images": [
	I0717 22:03:38.637867   34695 command_runner.go:130] >     {
	I0717 22:03:38.637880   34695 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0717 22:03:38.637886   34695 command_runner.go:130] >       "repoTags": [
	I0717 22:03:38.637896   34695 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0717 22:03:38.637905   34695 command_runner.go:130] >       ],
	I0717 22:03:38.637913   34695 command_runner.go:130] >       "repoDigests": [
	I0717 22:03:38.637931   34695 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0717 22:03:38.637948   34695 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0717 22:03:38.637957   34695 command_runner.go:130] >       ],
	I0717 22:03:38.637966   34695 command_runner.go:130] >       "size": "65249302",
	I0717 22:03:38.637976   34695 command_runner.go:130] >       "uid": null,
	I0717 22:03:38.637987   34695 command_runner.go:130] >       "username": "",
	I0717 22:03:38.637999   34695 command_runner.go:130] >       "spec": null
	I0717 22:03:38.638009   34695 command_runner.go:130] >     },
	I0717 22:03:38.638015   34695 command_runner.go:130] >     {
	I0717 22:03:38.638030   34695 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0717 22:03:38.638040   34695 command_runner.go:130] >       "repoTags": [
	I0717 22:03:38.638051   34695 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0717 22:03:38.638060   34695 command_runner.go:130] >       ],
	I0717 22:03:38.638069   34695 command_runner.go:130] >       "repoDigests": [
	I0717 22:03:38.638087   34695 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0717 22:03:38.638104   34695 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0717 22:03:38.638113   34695 command_runner.go:130] >       ],
	I0717 22:03:38.638123   34695 command_runner.go:130] >       "size": "31470524",
	I0717 22:03:38.638134   34695 command_runner.go:130] >       "uid": null,
	I0717 22:03:38.638145   34695 command_runner.go:130] >       "username": "",
	I0717 22:03:38.638155   34695 command_runner.go:130] >       "spec": null
	I0717 22:03:38.638164   34695 command_runner.go:130] >     },
	I0717 22:03:38.638171   34695 command_runner.go:130] >     {
	I0717 22:03:38.638185   34695 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0717 22:03:38.638203   34695 command_runner.go:130] >       "repoTags": [
	I0717 22:03:38.638216   34695 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0717 22:03:38.638225   34695 command_runner.go:130] >       ],
	I0717 22:03:38.638236   34695 command_runner.go:130] >       "repoDigests": [
	I0717 22:03:38.638251   34695 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0717 22:03:38.638267   34695 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0717 22:03:38.638276   34695 command_runner.go:130] >       ],
	I0717 22:03:38.638285   34695 command_runner.go:130] >       "size": "53621675",
	I0717 22:03:38.638295   34695 command_runner.go:130] >       "uid": null,
	I0717 22:03:38.638302   34695 command_runner.go:130] >       "username": "",
	I0717 22:03:38.638312   34695 command_runner.go:130] >       "spec": null
	I0717 22:03:38.638322   34695 command_runner.go:130] >     },
	I0717 22:03:38.638328   34695 command_runner.go:130] >     {
	I0717 22:03:38.638343   34695 command_runner.go:130] >       "id": "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681",
	I0717 22:03:38.638353   34695 command_runner.go:130] >       "repoTags": [
	I0717 22:03:38.638367   34695 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0717 22:03:38.638387   34695 command_runner.go:130] >       ],
	I0717 22:03:38.638398   34695 command_runner.go:130] >       "repoDigests": [
	I0717 22:03:38.638414   34695 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83",
	I0717 22:03:38.638429   34695 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"
	I0717 22:03:38.638439   34695 command_runner.go:130] >       ],
	I0717 22:03:38.638448   34695 command_runner.go:130] >       "size": "297083935",
	I0717 22:03:38.638458   34695 command_runner.go:130] >       "uid": {
	I0717 22:03:38.638468   34695 command_runner.go:130] >         "value": "0"
	I0717 22:03:38.638481   34695 command_runner.go:130] >       },
	I0717 22:03:38.638491   34695 command_runner.go:130] >       "username": "",
	I0717 22:03:38.638501   34695 command_runner.go:130] >       "spec": null
	I0717 22:03:38.638507   34695 command_runner.go:130] >     },
	I0717 22:03:38.638516   34695 command_runner.go:130] >     {
	I0717 22:03:38.638528   34695 command_runner.go:130] >       "id": "08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a",
	I0717 22:03:38.638538   34695 command_runner.go:130] >       "repoTags": [
	I0717 22:03:38.638550   34695 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.3"
	I0717 22:03:38.638559   34695 command_runner.go:130] >       ],
	I0717 22:03:38.638567   34695 command_runner.go:130] >       "repoDigests": [
	I0717 22:03:38.638587   34695 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb",
	I0717 22:03:38.638603   34695 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0"
	I0717 22:03:38.638613   34695 command_runner.go:130] >       ],
	I0717 22:03:38.638624   34695 command_runner.go:130] >       "size": "122065872",
	I0717 22:03:38.638634   34695 command_runner.go:130] >       "uid": {
	I0717 22:03:38.638643   34695 command_runner.go:130] >         "value": "0"
	I0717 22:03:38.638653   34695 command_runner.go:130] >       },
	I0717 22:03:38.638662   34695 command_runner.go:130] >       "username": "",
	I0717 22:03:38.638672   34695 command_runner.go:130] >       "spec": null
	I0717 22:03:38.638681   34695 command_runner.go:130] >     },
	I0717 22:03:38.638688   34695 command_runner.go:130] >     {
	I0717 22:03:38.638702   34695 command_runner.go:130] >       "id": "7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f",
	I0717 22:03:38.638711   34695 command_runner.go:130] >       "repoTags": [
	I0717 22:03:38.638721   34695 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.3"
	I0717 22:03:38.638731   34695 command_runner.go:130] >       ],
	I0717 22:03:38.638738   34695 command_runner.go:130] >       "repoDigests": [
	I0717 22:03:38.638755   34695 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e",
	I0717 22:03:38.638772   34695 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:d3bdc20876edfaa4894cf8464dc98592385a43cbc033b37846dccc2460c7bc06"
	I0717 22:03:38.638785   34695 command_runner.go:130] >       ],
	I0717 22:03:38.638796   34695 command_runner.go:130] >       "size": "113919286",
	I0717 22:03:38.638806   34695 command_runner.go:130] >       "uid": {
	I0717 22:03:38.638814   34695 command_runner.go:130] >         "value": "0"
	I0717 22:03:38.638824   34695 command_runner.go:130] >       },
	I0717 22:03:38.638833   34695 command_runner.go:130] >       "username": "",
	I0717 22:03:38.638843   34695 command_runner.go:130] >       "spec": null
	I0717 22:03:38.638853   34695 command_runner.go:130] >     },
	I0717 22:03:38.638862   34695 command_runner.go:130] >     {
	I0717 22:03:38.638874   34695 command_runner.go:130] >       "id": "5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c",
	I0717 22:03:38.638884   34695 command_runner.go:130] >       "repoTags": [
	I0717 22:03:38.638896   34695 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.3"
	I0717 22:03:38.638905   34695 command_runner.go:130] >       ],
	I0717 22:03:38.638913   34695 command_runner.go:130] >       "repoDigests": [
	I0717 22:03:38.638929   34695 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f",
	I0717 22:03:38.638945   34695 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699"
	I0717 22:03:38.638954   34695 command_runner.go:130] >       ],
	I0717 22:03:38.638963   34695 command_runner.go:130] >       "size": "72713623",
	I0717 22:03:38.638973   34695 command_runner.go:130] >       "uid": null,
	I0717 22:03:38.638984   34695 command_runner.go:130] >       "username": "",
	I0717 22:03:38.638991   34695 command_runner.go:130] >       "spec": null
	I0717 22:03:38.639000   34695 command_runner.go:130] >     },
	I0717 22:03:38.639006   34695 command_runner.go:130] >     {
	I0717 22:03:38.639018   34695 command_runner.go:130] >       "id": "41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a",
	I0717 22:03:38.639029   34695 command_runner.go:130] >       "repoTags": [
	I0717 22:03:38.639041   34695 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.3"
	I0717 22:03:38.639051   34695 command_runner.go:130] >       ],
	I0717 22:03:38.639060   34695 command_runner.go:130] >       "repoDigests": [
	I0717 22:03:38.639076   34695 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082",
	I0717 22:03:38.639107   34695 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8"
	I0717 22:03:38.639117   34695 command_runner.go:130] >       ],
	I0717 22:03:38.639124   34695 command_runner.go:130] >       "size": "59811126",
	I0717 22:03:38.639130   34695 command_runner.go:130] >       "uid": {
	I0717 22:03:38.639137   34695 command_runner.go:130] >         "value": "0"
	I0717 22:03:38.639145   34695 command_runner.go:130] >       },
	I0717 22:03:38.639153   34695 command_runner.go:130] >       "username": "",
	I0717 22:03:38.639165   34695 command_runner.go:130] >       "spec": null
	I0717 22:03:38.639172   34695 command_runner.go:130] >     },
	I0717 22:03:38.639179   34695 command_runner.go:130] >     {
	I0717 22:03:38.639198   34695 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0717 22:03:38.639208   34695 command_runner.go:130] >       "repoTags": [
	I0717 22:03:38.639220   34695 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0717 22:03:38.639230   34695 command_runner.go:130] >       ],
	I0717 22:03:38.639239   34695 command_runner.go:130] >       "repoDigests": [
	I0717 22:03:38.639252   34695 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0717 22:03:38.639268   34695 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0717 22:03:38.639277   34695 command_runner.go:130] >       ],
	I0717 22:03:38.639287   34695 command_runner.go:130] >       "size": "750414",
	I0717 22:03:38.639296   34695 command_runner.go:130] >       "uid": {
	I0717 22:03:38.639307   34695 command_runner.go:130] >         "value": "65535"
	I0717 22:03:38.639314   34695 command_runner.go:130] >       },
	I0717 22:03:38.639323   34695 command_runner.go:130] >       "username": "",
	I0717 22:03:38.639333   34695 command_runner.go:130] >       "spec": null
	I0717 22:03:38.639340   34695 command_runner.go:130] >     }
	I0717 22:03:38.639348   34695 command_runner.go:130] >   ]
	I0717 22:03:38.639355   34695 command_runner.go:130] > }
	I0717 22:03:38.639481   34695 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 22:03:38.639492   34695 cache_images.go:84] Images are preloaded, skipping loading
	I0717 22:03:38.639590   34695 ssh_runner.go:195] Run: crio config
	I0717 22:03:38.687807   34695 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0717 22:03:38.687837   34695 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0717 22:03:38.687846   34695 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0717 22:03:38.687853   34695 command_runner.go:130] > #
	I0717 22:03:38.687863   34695 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0717 22:03:38.687873   34695 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0717 22:03:38.687881   34695 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0717 22:03:38.687893   34695 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0717 22:03:38.687899   34695 command_runner.go:130] > # reload'.
	I0717 22:03:38.687908   34695 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0717 22:03:38.687919   34695 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0717 22:03:38.687930   34695 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0717 22:03:38.687944   34695 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0717 22:03:38.687954   34695 command_runner.go:130] > [crio]
	I0717 22:03:38.687973   34695 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0717 22:03:38.687983   34695 command_runner.go:130] > # containers images, in this directory.
	I0717 22:03:38.687991   34695 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0717 22:03:38.688011   34695 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0717 22:03:38.688022   34695 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0717 22:03:38.688034   34695 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0717 22:03:38.688048   34695 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0717 22:03:38.688057   34695 command_runner.go:130] > storage_driver = "overlay"
	I0717 22:03:38.688068   34695 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0717 22:03:38.688081   34695 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0717 22:03:38.688093   34695 command_runner.go:130] > storage_option = [
	I0717 22:03:38.688105   34695 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0717 22:03:38.688114   34695 command_runner.go:130] > ]
	I0717 22:03:38.688126   34695 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0717 22:03:38.688140   34695 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0717 22:03:38.688151   34695 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0717 22:03:38.688165   34695 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0717 22:03:38.688179   34695 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0717 22:03:38.688189   34695 command_runner.go:130] > # always happen on a node reboot
	I0717 22:03:38.688198   34695 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0717 22:03:38.688212   34695 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0717 22:03:38.688226   34695 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0717 22:03:38.688242   34695 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0717 22:03:38.688254   34695 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0717 22:03:38.688269   34695 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0717 22:03:38.688286   34695 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0717 22:03:38.688296   34695 command_runner.go:130] > # internal_wipe = true
	I0717 22:03:38.688309   34695 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0717 22:03:38.688324   34695 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0717 22:03:38.688350   34695 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0717 22:03:38.688360   34695 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0717 22:03:38.688374   34695 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0717 22:03:38.688384   34695 command_runner.go:130] > [crio.api]
	I0717 22:03:38.688395   34695 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0717 22:03:38.688406   34695 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0717 22:03:38.688421   34695 command_runner.go:130] > # IP address on which the stream server will listen.
	I0717 22:03:38.688432   34695 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0717 22:03:38.688446   34695 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0717 22:03:38.688458   34695 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0717 22:03:38.688468   34695 command_runner.go:130] > # stream_port = "0"
	I0717 22:03:38.688481   34695 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0717 22:03:38.688491   34695 command_runner.go:130] > # stream_enable_tls = false
	I0717 22:03:38.688502   34695 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0717 22:03:38.688541   34695 command_runner.go:130] > # stream_idle_timeout = ""
	I0717 22:03:38.688555   34695 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0717 22:03:38.688566   34695 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0717 22:03:38.688577   34695 command_runner.go:130] > # minutes.
	I0717 22:03:38.688588   34695 command_runner.go:130] > # stream_tls_cert = ""
	I0717 22:03:38.688599   34695 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0717 22:03:38.688614   34695 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0717 22:03:38.688625   34695 command_runner.go:130] > # stream_tls_key = ""
	I0717 22:03:38.688636   34695 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0717 22:03:38.688650   34695 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0717 22:03:38.688663   34695 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0717 22:03:38.688673   34695 command_runner.go:130] > # stream_tls_ca = ""
	I0717 22:03:38.688687   34695 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0717 22:03:38.688703   34695 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0717 22:03:38.688744   34695 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0717 22:03:38.688761   34695 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0717 22:03:38.688783   34695 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0717 22:03:38.688797   34695 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0717 22:03:38.688807   34695 command_runner.go:130] > [crio.runtime]
	I0717 22:03:38.688821   34695 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0717 22:03:38.688834   34695 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0717 22:03:38.688845   34695 command_runner.go:130] > # "nofile=1024:2048"
	I0717 22:03:38.688856   34695 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0717 22:03:38.688867   34695 command_runner.go:130] > # default_ulimits = [
	I0717 22:03:38.688875   34695 command_runner.go:130] > # ]
	I0717 22:03:38.688887   34695 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0717 22:03:38.688898   34695 command_runner.go:130] > # no_pivot = false
	I0717 22:03:38.688908   34695 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0717 22:03:38.688924   34695 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0717 22:03:38.688935   34695 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0717 22:03:38.688949   34695 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0717 22:03:38.688961   34695 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0717 22:03:38.688976   34695 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 22:03:38.688987   34695 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0717 22:03:38.689001   34695 command_runner.go:130] > # Cgroup setting for conmon
	I0717 22:03:38.689016   34695 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0717 22:03:38.689027   34695 command_runner.go:130] > conmon_cgroup = "pod"
	I0717 22:03:38.689041   34695 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0717 22:03:38.689053   34695 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0717 22:03:38.689069   34695 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 22:03:38.689078   34695 command_runner.go:130] > conmon_env = [
	I0717 22:03:38.689092   34695 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0717 22:03:38.689101   34695 command_runner.go:130] > ]
	I0717 22:03:38.689111   34695 command_runner.go:130] > # Additional environment variables to set for all the
	I0717 22:03:38.689124   34695 command_runner.go:130] > # containers. These are overridden if set in the
	I0717 22:03:38.689138   34695 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0717 22:03:38.689147   34695 command_runner.go:130] > # default_env = [
	I0717 22:03:38.689152   34695 command_runner.go:130] > # ]
	I0717 22:03:38.689162   34695 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0717 22:03:38.689172   34695 command_runner.go:130] > # selinux = false
	I0717 22:03:38.689186   34695 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0717 22:03:38.689198   34695 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0717 22:03:38.689211   34695 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0717 22:03:38.689220   34695 command_runner.go:130] > # seccomp_profile = ""
	I0717 22:03:38.689233   34695 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0717 22:03:38.689247   34695 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0717 22:03:38.689261   34695 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0717 22:03:38.689273   34695 command_runner.go:130] > # which might increase security.
	I0717 22:03:38.689284   34695 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0717 22:03:38.689299   34695 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0717 22:03:38.689313   34695 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0717 22:03:38.689327   34695 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0717 22:03:38.689341   34695 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0717 22:03:38.689354   34695 command_runner.go:130] > # This option supports live configuration reload.
	I0717 22:03:38.689367   34695 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0717 22:03:38.689385   34695 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0717 22:03:38.689396   34695 command_runner.go:130] > # the cgroup blockio controller.
	I0717 22:03:38.689404   34695 command_runner.go:130] > # blockio_config_file = ""
	I0717 22:03:38.689419   34695 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0717 22:03:38.689429   34695 command_runner.go:130] > # irqbalance daemon.
	I0717 22:03:38.689442   34695 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0717 22:03:38.689457   34695 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0717 22:03:38.689469   34695 command_runner.go:130] > # This option supports live configuration reload.
	I0717 22:03:38.689480   34695 command_runner.go:130] > # rdt_config_file = ""
	I0717 22:03:38.689490   34695 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0717 22:03:38.689501   34695 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0717 22:03:38.689513   34695 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0717 22:03:38.689564   34695 command_runner.go:130] > # separate_pull_cgroup = ""
	I0717 22:03:38.689578   34695 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0717 22:03:38.689593   34695 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0717 22:03:38.689603   34695 command_runner.go:130] > # will be added.
	I0717 22:03:38.689614   34695 command_runner.go:130] > # default_capabilities = [
	I0717 22:03:38.689621   34695 command_runner.go:130] > # 	"CHOWN",
	I0717 22:03:38.689631   34695 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0717 22:03:38.689640   34695 command_runner.go:130] > # 	"FSETID",
	I0717 22:03:38.689648   34695 command_runner.go:130] > # 	"FOWNER",
	I0717 22:03:38.689658   34695 command_runner.go:130] > # 	"SETGID",
	I0717 22:03:38.689667   34695 command_runner.go:130] > # 	"SETUID",
	I0717 22:03:38.689675   34695 command_runner.go:130] > # 	"SETPCAP",
	I0717 22:03:38.689685   34695 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0717 22:03:38.689706   34695 command_runner.go:130] > # 	"KILL",
	I0717 22:03:38.689714   34695 command_runner.go:130] > # ]
	I0717 22:03:38.689726   34695 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0717 22:03:38.689740   34695 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 22:03:38.689751   34695 command_runner.go:130] > # default_sysctls = [
	I0717 22:03:38.689758   34695 command_runner.go:130] > # ]
	I0717 22:03:38.689770   34695 command_runner.go:130] > # List of devices on the host that a
	I0717 22:03:38.689784   34695 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0717 22:03:38.689795   34695 command_runner.go:130] > # allowed_devices = [
	I0717 22:03:38.689805   34695 command_runner.go:130] > # 	"/dev/fuse",
	I0717 22:03:38.689813   34695 command_runner.go:130] > # ]
	I0717 22:03:38.689823   34695 command_runner.go:130] > # List of additional devices, specified as
	I0717 22:03:38.689839   34695 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0717 22:03:38.689851   34695 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0717 22:03:38.689875   34695 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 22:03:38.689886   34695 command_runner.go:130] > # additional_devices = [
	I0717 22:03:38.689895   34695 command_runner.go:130] > # ]
	I0717 22:03:38.689905   34695 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0717 22:03:38.689914   34695 command_runner.go:130] > # cdi_spec_dirs = [
	I0717 22:03:38.689921   34695 command_runner.go:130] > # 	"/etc/cdi",
	I0717 22:03:38.689931   34695 command_runner.go:130] > # 	"/var/run/cdi",
	I0717 22:03:38.689940   34695 command_runner.go:130] > # ]
	I0717 22:03:38.689952   34695 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0717 22:03:38.689966   34695 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0717 22:03:38.689976   34695 command_runner.go:130] > # Defaults to false.
	I0717 22:03:38.689988   34695 command_runner.go:130] > # device_ownership_from_security_context = false
	I0717 22:03:38.690003   34695 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0717 22:03:38.690017   34695 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0717 22:03:38.690027   34695 command_runner.go:130] > # hooks_dir = [
	I0717 22:03:38.690038   34695 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0717 22:03:38.690046   34695 command_runner.go:130] > # ]
	I0717 22:03:38.690057   34695 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0717 22:03:38.690072   34695 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0717 22:03:38.690085   34695 command_runner.go:130] > # its default mounts from the following two files:
	I0717 22:03:38.690093   34695 command_runner.go:130] > #
	I0717 22:03:38.690105   34695 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0717 22:03:38.690118   34695 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0717 22:03:38.690128   34695 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0717 22:03:38.690137   34695 command_runner.go:130] > #
	I0717 22:03:38.690149   34695 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0717 22:03:38.690163   34695 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0717 22:03:38.690177   34695 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0717 22:03:38.690189   34695 command_runner.go:130] > #      only add mounts it finds in this file.
	I0717 22:03:38.690198   34695 command_runner.go:130] > #
	I0717 22:03:38.690205   34695 command_runner.go:130] > # default_mounts_file = ""
	I0717 22:03:38.690218   34695 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0717 22:03:38.690234   34695 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0717 22:03:38.690243   34695 command_runner.go:130] > pids_limit = 1024
	I0717 22:03:38.690255   34695 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0717 22:03:38.690269   34695 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0717 22:03:38.690283   34695 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0717 22:03:38.690300   34695 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0717 22:03:38.690310   34695 command_runner.go:130] > # log_size_max = -1
	I0717 22:03:38.690326   34695 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0717 22:03:38.690336   34695 command_runner.go:130] > # log_to_journald = false
	I0717 22:03:38.690348   34695 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0717 22:03:38.690359   34695 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0717 22:03:38.690369   34695 command_runner.go:130] > # Path to directory for container attach sockets.
	I0717 22:03:38.690381   34695 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0717 22:03:38.690394   34695 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0717 22:03:38.690404   34695 command_runner.go:130] > # bind_mount_prefix = ""
	I0717 22:03:38.690415   34695 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0717 22:03:38.690425   34695 command_runner.go:130] > # read_only = false
	I0717 22:03:38.690439   34695 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0717 22:03:38.690454   34695 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0717 22:03:38.690464   34695 command_runner.go:130] > # live configuration reload.
	I0717 22:03:38.690472   34695 command_runner.go:130] > # log_level = "info"
	I0717 22:03:38.690485   34695 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0717 22:03:38.690497   34695 command_runner.go:130] > # This option supports live configuration reload.
	I0717 22:03:38.690507   34695 command_runner.go:130] > # log_filter = ""
	I0717 22:03:38.690517   34695 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0717 22:03:38.690530   34695 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0717 22:03:38.690536   34695 command_runner.go:130] > # separated by comma.
	I0717 22:03:38.690542   34695 command_runner.go:130] > # uid_mappings = ""
	I0717 22:03:38.690551   34695 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0717 22:03:38.690565   34695 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0717 22:03:38.690573   34695 command_runner.go:130] > # separated by comma.
	I0717 22:03:38.690580   34695 command_runner.go:130] > # gid_mappings = ""
	I0717 22:03:38.690590   34695 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0717 22:03:38.690626   34695 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 22:03:38.690646   34695 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 22:03:38.690657   34695 command_runner.go:130] > # minimum_mappable_uid = -1
	I0717 22:03:38.690668   34695 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0717 22:03:38.690681   34695 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 22:03:38.690698   34695 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 22:03:38.690707   34695 command_runner.go:130] > # minimum_mappable_gid = -1
	I0717 22:03:38.690721   34695 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0717 22:03:38.690734   34695 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0717 22:03:38.690752   34695 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0717 22:03:38.690762   34695 command_runner.go:130] > # ctr_stop_timeout = 30
	I0717 22:03:38.690775   34695 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0717 22:03:38.690788   34695 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0717 22:03:38.690799   34695 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0717 22:03:38.690810   34695 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0717 22:03:38.690820   34695 command_runner.go:130] > drop_infra_ctr = false
	I0717 22:03:38.690830   34695 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0717 22:03:38.690838   34695 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0717 22:03:38.690845   34695 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0717 22:03:38.690851   34695 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0717 22:03:38.690858   34695 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0717 22:03:38.690867   34695 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0717 22:03:38.690873   34695 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0717 22:03:38.690880   34695 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0717 22:03:38.690887   34695 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0717 22:03:38.690893   34695 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0717 22:03:38.690901   34695 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0717 22:03:38.690909   34695 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0717 22:03:38.690915   34695 command_runner.go:130] > # default_runtime = "runc"
	I0717 22:03:38.690921   34695 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0717 22:03:38.690930   34695 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0717 22:03:38.690941   34695 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0717 22:03:38.690949   34695 command_runner.go:130] > # creation as a file is not desired either.
	I0717 22:03:38.690956   34695 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0717 22:03:38.690964   34695 command_runner.go:130] > # the hostname is being managed dynamically.
	I0717 22:03:38.690972   34695 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0717 22:03:38.690978   34695 command_runner.go:130] > # ]
	I0717 22:03:38.690984   34695 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0717 22:03:38.690992   34695 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0717 22:03:38.690999   34695 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0717 22:03:38.691008   34695 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0717 22:03:38.691014   34695 command_runner.go:130] > #
	I0717 22:03:38.691019   34695 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0717 22:03:38.691026   34695 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0717 22:03:38.691030   34695 command_runner.go:130] > #  runtime_type = "oci"
	I0717 22:03:38.691037   34695 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0717 22:03:38.691042   34695 command_runner.go:130] > #  privileged_without_host_devices = false
	I0717 22:03:38.691048   34695 command_runner.go:130] > #  allowed_annotations = []
	I0717 22:03:38.691052   34695 command_runner.go:130] > # Where:
	I0717 22:03:38.691058   34695 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0717 22:03:38.691066   34695 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0717 22:03:38.691074   34695 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0717 22:03:38.691082   34695 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0717 22:03:38.691086   34695 command_runner.go:130] > #   in $PATH.
	I0717 22:03:38.691094   34695 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0717 22:03:38.691101   34695 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0717 22:03:38.691109   34695 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0717 22:03:38.691116   34695 command_runner.go:130] > #   state.
	I0717 22:03:38.691122   34695 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0717 22:03:38.691131   34695 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0717 22:03:38.691139   34695 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0717 22:03:38.691144   34695 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0717 22:03:38.691152   34695 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0717 22:03:38.691161   34695 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0717 22:03:38.691166   34695 command_runner.go:130] > #   The currently recognized values are:
	I0717 22:03:38.691172   34695 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0717 22:03:38.691181   34695 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0717 22:03:38.691209   34695 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0717 22:03:38.691222   34695 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0717 22:03:38.691235   34695 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0717 22:03:38.691244   34695 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0717 22:03:38.691255   34695 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0717 22:03:38.691263   34695 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0717 22:03:38.691270   34695 command_runner.go:130] > #   should be moved to the container's cgroup
	I0717 22:03:38.691275   34695 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0717 22:03:38.691282   34695 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0717 22:03:38.691287   34695 command_runner.go:130] > runtime_type = "oci"
	I0717 22:03:38.691293   34695 command_runner.go:130] > runtime_root = "/run/runc"
	I0717 22:03:38.691297   34695 command_runner.go:130] > runtime_config_path = ""
	I0717 22:03:38.691304   34695 command_runner.go:130] > monitor_path = ""
	I0717 22:03:38.691308   34695 command_runner.go:130] > monitor_cgroup = ""
	I0717 22:03:38.691314   34695 command_runner.go:130] > monitor_exec_cgroup = ""
	I0717 22:03:38.691320   34695 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0717 22:03:38.691326   34695 command_runner.go:130] > # running containers
	I0717 22:03:38.691330   34695 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0717 22:03:38.691338   34695 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0717 22:03:38.691363   34695 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0717 22:03:38.691371   34695 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0717 22:03:38.691375   34695 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0717 22:03:38.691383   34695 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0717 22:03:38.691387   34695 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0717 22:03:38.691394   34695 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0717 22:03:38.691398   34695 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0717 22:03:38.691405   34695 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
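The runtime handlers defined above are selected from Kubernetes through a RuntimeClass whose handler field must match one of the [crio.runtime.runtimes.<name>] entries. A minimal sketch, assuming the kubectl context name multinode-009530 used later in this run; the class name is hypothetical:

    cat <<'EOF' | kubectl --context multinode-009530 apply -f -
    apiVersion: node.k8s.io/v1
    kind: RuntimeClass
    metadata:
      name: runc-example          # hypothetical RuntimeClass name
    handler: runc                 # must match a [crio.runtime.runtimes.runc] entry
    EOF

A pod then opts into it with spec.runtimeClassName: runc-example.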
	I0717 22:03:38.691411   34695 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0717 22:03:38.691419   34695 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0717 22:03:38.691426   34695 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0717 22:03:38.691435   34695 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0717 22:03:38.691449   34695 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0717 22:03:38.691461   34695 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0717 22:03:38.691479   34695 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0717 22:03:38.691491   34695 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0717 22:03:38.691499   34695 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0717 22:03:38.691506   34695 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0717 22:03:38.691512   34695 command_runner.go:130] > # Example:
	I0717 22:03:38.691516   34695 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0717 22:03:38.691523   34695 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0717 22:03:38.691528   34695 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0717 22:03:38.691540   34695 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0717 22:03:38.691549   34695 command_runner.go:130] > # cpuset = 0
	I0717 22:03:38.691559   34695 command_runner.go:130] > # cpushares = "0-1"
	I0717 22:03:38.691569   34695 command_runner.go:130] > # Where:
	I0717 22:03:38.691577   34695 command_runner.go:130] > # The workload name is workload-type.
	I0717 22:03:38.691591   34695 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0717 22:03:38.691599   34695 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0717 22:03:38.691605   34695 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0717 22:03:38.691614   34695 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0717 22:03:38.691620   34695 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0717 22:03:38.691624   34695 command_runner.go:130] > # 
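If a workload table like the commented example above were actually enabled, a pod would opt in with the activation annotation only (the value is ignored), and per-container overrides would follow the prefix form documented above. A hedged sketch reusing the example names from the config comments; nothing here is enabled in this run:

    cat <<'EOF' | kubectl --context multinode-009530 apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: workload-demo                       # hypothetical pod name
      annotations:
        io.crio/workload: ""                    # activation annotation, key only
        # per-container override, mirroring the example form above:
        # io.crio.workload-type/workload-demo: '{"cpushares": "512"}'
    spec:
      containers:
      - name: workload-demo
        image: registry.k8s.io/pause:3.9        # image already referenced by this config
    EOF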
	I0717 22:03:38.691631   34695 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0717 22:03:38.691638   34695 command_runner.go:130] > #
	I0717 22:03:38.691644   34695 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0717 22:03:38.691651   34695 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0717 22:03:38.691658   34695 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0717 22:03:38.691666   34695 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0717 22:03:38.691674   34695 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0717 22:03:38.691680   34695 command_runner.go:130] > [crio.image]
	I0717 22:03:38.691685   34695 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0717 22:03:38.691696   34695 command_runner.go:130] > # default_transport = "docker://"
	I0717 22:03:38.691704   34695 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0717 22:03:38.691712   34695 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0717 22:03:38.691719   34695 command_runner.go:130] > # global_auth_file = ""
	I0717 22:03:38.691724   34695 command_runner.go:130] > # The image used to instantiate infra containers.
	I0717 22:03:38.691731   34695 command_runner.go:130] > # This option supports live configuration reload.
	I0717 22:03:38.691736   34695 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0717 22:03:38.691744   34695 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0717 22:03:38.691752   34695 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0717 22:03:38.691757   34695 command_runner.go:130] > # This option supports live configuration reload.
	I0717 22:03:38.691764   34695 command_runner.go:130] > # pause_image_auth_file = ""
	I0717 22:03:38.691793   34695 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0717 22:03:38.691806   34695 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0717 22:03:38.691818   34695 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0717 22:03:38.691823   34695 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0717 22:03:38.691830   34695 command_runner.go:130] > # pause_command = "/pause"
	I0717 22:03:38.691836   34695 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0717 22:03:38.691848   34695 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0717 22:03:38.691855   34695 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0717 22:03:38.691861   34695 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0717 22:03:38.691866   34695 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0717 22:03:38.691870   34695 command_runner.go:130] > # signature_policy = ""
	I0717 22:03:38.691875   34695 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0717 22:03:38.691881   34695 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0717 22:03:38.691884   34695 command_runner.go:130] > # changing them here.
	I0717 22:03:38.691888   34695 command_runner.go:130] > # insecure_registries = [
	I0717 22:03:38.691892   34695 command_runner.go:130] > # ]
	I0717 22:03:38.691897   34695 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0717 22:03:38.691902   34695 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0717 22:03:38.691905   34695 command_runner.go:130] > # image_volumes = "mkdir"
	I0717 22:03:38.691910   34695 command_runner.go:130] > # Temporary directory to use for storing big files
	I0717 22:03:38.691914   34695 command_runner.go:130] > # big_files_temporary_dir = ""
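The pause_image set above can be spot-checked against the images actually present on the node once the cluster is up (using a minikube binary on PATH and the profile name from this run; crictl is the same tool invoked elsewhere in this log):

    minikube ssh -p multinode-009530 -- sudo crictl images | grep pause
    # expected to list registry.k8s.io/pause:3.9 once kubeadm has pulled it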
	I0717 22:03:38.691920   34695 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0717 22:03:38.691923   34695 command_runner.go:130] > # CNI plugins.
	I0717 22:03:38.691927   34695 command_runner.go:130] > [crio.network]
	I0717 22:03:38.691932   34695 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0717 22:03:38.691937   34695 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0717 22:03:38.691941   34695 command_runner.go:130] > # cni_default_network = ""
	I0717 22:03:38.691946   34695 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0717 22:03:38.691950   34695 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0717 22:03:38.691955   34695 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0717 22:03:38.691959   34695 command_runner.go:130] > # plugin_dirs = [
	I0717 22:03:38.691962   34695 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0717 22:03:38.691965   34695 command_runner.go:130] > # ]
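The CNI directories named above can be listed directly on the node; with the kindnet CNI that this run selects further below, a config file is expected to appear under /etc/cni/net.d/ once the node is provisioned:

    minikube ssh -p multinode-009530 -- sudo ls -la /etc/cni/net.d/ /opt/cni/bin/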
	I0717 22:03:38.691970   34695 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0717 22:03:38.691973   34695 command_runner.go:130] > [crio.metrics]
	I0717 22:03:38.691978   34695 command_runner.go:130] > # Globally enable or disable metrics support.
	I0717 22:03:38.691981   34695 command_runner.go:130] > enable_metrics = true
	I0717 22:03:38.691986   34695 command_runner.go:130] > # Specify enabled metrics collectors.
	I0717 22:03:38.691990   34695 command_runner.go:130] > # Per default all metrics are enabled.
	I0717 22:03:38.691995   34695 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0717 22:03:38.692001   34695 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0717 22:03:38.692006   34695 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0717 22:03:38.692010   34695 command_runner.go:130] > # metrics_collectors = [
	I0717 22:03:38.692013   34695 command_runner.go:130] > # 	"operations",
	I0717 22:03:38.692018   34695 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0717 22:03:38.692027   34695 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0717 22:03:38.692035   34695 command_runner.go:130] > # 	"operations_errors",
	I0717 22:03:38.692042   34695 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0717 22:03:38.692046   34695 command_runner.go:130] > # 	"image_pulls_by_name",
	I0717 22:03:38.692052   34695 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0717 22:03:38.692056   34695 command_runner.go:130] > # 	"image_pulls_failures",
	I0717 22:03:38.692064   34695 command_runner.go:130] > # 	"image_pulls_successes",
	I0717 22:03:38.692070   34695 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0717 22:03:38.692076   34695 command_runner.go:130] > # 	"image_layer_reuse",
	I0717 22:03:38.692081   34695 command_runner.go:130] > # 	"containers_oom_total",
	I0717 22:03:38.692087   34695 command_runner.go:130] > # 	"containers_oom",
	I0717 22:03:38.692091   34695 command_runner.go:130] > # 	"processes_defunct",
	I0717 22:03:38.692097   34695 command_runner.go:130] > # 	"operations_total",
	I0717 22:03:38.692102   34695 command_runner.go:130] > # 	"operations_latency_seconds",
	I0717 22:03:38.692108   34695 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0717 22:03:38.692113   34695 command_runner.go:130] > # 	"operations_errors_total",
	I0717 22:03:38.692117   34695 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0717 22:03:38.692122   34695 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0717 22:03:38.692128   34695 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0717 22:03:38.692134   34695 command_runner.go:130] > # 	"image_pulls_success_total",
	I0717 22:03:38.692141   34695 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0717 22:03:38.692145   34695 command_runner.go:130] > # 	"containers_oom_count_total",
	I0717 22:03:38.692152   34695 command_runner.go:130] > # ]
	I0717 22:03:38.692157   34695 command_runner.go:130] > # The port on which the metrics server will listen.
	I0717 22:03:38.692164   34695 command_runner.go:130] > # metrics_port = 9090
	I0717 22:03:38.692169   34695 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0717 22:03:38.692175   34695 command_runner.go:130] > # metrics_socket = ""
	I0717 22:03:38.692180   34695 command_runner.go:130] > # The certificate for the secure metrics server.
	I0717 22:03:38.692188   34695 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0717 22:03:38.692196   34695 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0717 22:03:38.692201   34695 command_runner.go:130] > # certificate on any modification event.
	I0717 22:03:38.692207   34695 command_runner.go:130] > # metrics_cert = ""
	I0717 22:03:38.692212   34695 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0717 22:03:38.692219   34695 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0717 22:03:38.692223   34695 command_runner.go:130] > # metrics_key = ""
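With enable_metrics = true and the default metrics_port of 9090 shown above, the Prometheus endpoint can be probed from the node itself. A sketch only; reachability depends on the metrics cert/socket options above being left at their defaults:

    minikube ssh -p multinode-009530 -- curl -s http://127.0.0.1:9090/metrics | grep -m 5 crio_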
	I0717 22:03:38.692233   34695 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0717 22:03:38.692239   34695 command_runner.go:130] > [crio.tracing]
	I0717 22:03:38.692245   34695 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0717 22:03:38.692251   34695 command_runner.go:130] > # enable_tracing = false
	I0717 22:03:38.692256   34695 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0717 22:03:38.692263   34695 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0717 22:03:38.692268   34695 command_runner.go:130] > # Number of samples to collect per million spans.
	I0717 22:03:38.692274   34695 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0717 22:03:38.692280   34695 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0717 22:03:38.692286   34695 command_runner.go:130] > [crio.stats]
	I0717 22:03:38.692291   34695 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0717 22:03:38.692301   34695 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0717 22:03:38.692308   34695 command_runner.go:130] > # stats_collection_period = 0
	I0717 22:03:38.692997   34695 command_runner.go:130] ! time="2023-07-17 22:03:38.677105885Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0717 22:03:38.693016   34695 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
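The configuration dumped above is what minikube leaves on the node, so individual settings such as the cgroupfs manager, the pause image, or the pids limit can be re-checked later by reading CRI-O's config (path assumed to be the standard /etc/crio/crio.conf):

    minikube ssh -p multinode-009530 -- "sudo grep -E 'cgroup_manager|pause_image|pids_limit' /etc/crio/crio.conf"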
	I0717 22:03:38.693081   34695 cni.go:84] Creating CNI manager for ""
	I0717 22:03:38.693092   34695 cni.go:137] 1 nodes found, recommending kindnet
	I0717 22:03:38.693100   34695 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 22:03:38.693119   34695 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.222 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-009530 NodeName:multinode-009530 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.222 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 22:03:38.693237   34695 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.222
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-009530"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.222
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.222"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
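Once the kubeadm init run further below succeeds, the ClusterConfiguration rendered above is persisted by kubeadm in the kubeadm-config ConfigMap, so it can be compared against this log after the fact:

    kubectl --context multinode-009530 -n kube-system get configmap kubeadm-config -o yaml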
	
	I0717 22:03:38.693300   34695 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-009530 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:multinode-009530 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
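The ExecStart line above is installed as a systemd drop-in (the 10-kubeadm.conf transferred a few lines below), so the effective kubelet unit, including that drop-in, can be displayed on the node with:

    minikube ssh -p multinode-009530 -- systemctl cat kubelet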
	I0717 22:03:38.693347   34695 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 22:03:38.703859   34695 command_runner.go:130] > kubeadm
	I0717 22:03:38.703884   34695 command_runner.go:130] > kubectl
	I0717 22:03:38.703890   34695 command_runner.go:130] > kubelet
	I0717 22:03:38.704025   34695 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 22:03:38.704114   34695 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 22:03:38.712710   34695 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0717 22:03:38.729445   34695 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 22:03:38.745773   34695 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0717 22:03:38.762084   34695 ssh_runner.go:195] Run: grep 192.168.39.222	control-plane.minikube.internal$ /etc/hosts
	I0717 22:03:38.765972   34695 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.222	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:03:38.779053   34695 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530 for IP: 192.168.39.222
	I0717 22:03:38.779095   34695 certs.go:190] acquiring lock for shared ca certs: {Name:mk358cdd8ffcf2f8ada4337c2bb687d932ef5afe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:03:38.779280   34695 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key
	I0717 22:03:38.779384   34695 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key
	I0717 22:03:38.779446   34695 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/client.key
	I0717 22:03:38.779469   34695 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/client.crt with IP's: []
	I0717 22:03:38.820675   34695 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/client.crt ...
	I0717 22:03:38.820709   34695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/client.crt: {Name:mke290f9d3a9e2ef67b84c60792c65ac08a50448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:03:38.820888   34695 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/client.key ...
	I0717 22:03:38.820902   34695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/client.key: {Name:mkc72d4691f2903cb26f7e89f67278e6114d06bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:03:38.820987   34695 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/apiserver.key.ac9b12d1
	I0717 22:03:38.821003   34695 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/apiserver.crt.ac9b12d1 with IP's: [192.168.39.222 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 22:03:39.082353   34695 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/apiserver.crt.ac9b12d1 ...
	I0717 22:03:39.082382   34695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/apiserver.crt.ac9b12d1: {Name:mkd892c1fa433dda3c2eb275d2def79b67b2aa23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:03:39.082535   34695 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/apiserver.key.ac9b12d1 ...
	I0717 22:03:39.082547   34695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/apiserver.key.ac9b12d1: {Name:mk5241b5982d273df1eed456f243cdabe83e78d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:03:39.082617   34695 certs.go:337] copying /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/apiserver.crt.ac9b12d1 -> /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/apiserver.crt
	I0717 22:03:39.082680   34695 certs.go:341] copying /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/apiserver.key.ac9b12d1 -> /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/apiserver.key
	I0717 22:03:39.082726   34695 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/proxy-client.key
	I0717 22:03:39.082740   34695 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/proxy-client.crt with IP's: []
	I0717 22:03:39.342102   34695 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/proxy-client.crt ...
	I0717 22:03:39.342130   34695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/proxy-client.crt: {Name:mk507a6a89de9e921f5e83f99944c264c4736b74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:03:39.342299   34695 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/proxy-client.key ...
	I0717 22:03:39.342315   34695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/proxy-client.key: {Name:mk62673a5f1d0d7d5e17b10e1ed5b31bac43bc23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:03:39.342379   34695 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 22:03:39.342397   34695 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 22:03:39.342407   34695 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 22:03:39.342419   34695 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 22:03:39.342430   34695 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 22:03:39.342442   34695 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 22:03:39.342457   34695 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 22:03:39.342470   34695 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 22:03:39.342517   34695 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem (1338 bytes)
	W0717 22:03:39.342549   34695 certs.go:433] ignoring /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990_empty.pem, impossibly tiny 0 bytes
	I0717 22:03:39.342557   34695 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 22:03:39.342578   34695 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem (1078 bytes)
	I0717 22:03:39.342600   34695 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem (1123 bytes)
	I0717 22:03:39.342631   34695 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem (1675 bytes)
	I0717 22:03:39.342667   34695 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:03:39.342691   34695 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:03:39.342703   34695 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem -> /usr/share/ca-certificates/22990.pem
	I0717 22:03:39.342715   34695 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> /usr/share/ca-certificates/229902.pem
	I0717 22:03:39.343171   34695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 22:03:39.373557   34695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 22:03:39.399047   34695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 22:03:39.423349   34695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 22:03:39.447270   34695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 22:03:39.470729   34695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 22:03:39.495179   34695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 22:03:39.520618   34695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 22:03:39.546032   34695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 22:03:39.570922   34695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem --> /usr/share/ca-certificates/22990.pem (1338 bytes)
	I0717 22:03:39.596128   34695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /usr/share/ca-certificates/229902.pem (1708 bytes)
	I0717 22:03:39.620850   34695 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 22:03:39.637262   34695 ssh_runner.go:195] Run: openssl version
	I0717 22:03:39.643702   34695 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0717 22:03:39.643775   34695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22990.pem && ln -fs /usr/share/ca-certificates/22990.pem /etc/ssl/certs/22990.pem"
	I0717 22:03:39.654403   34695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22990.pem
	I0717 22:03:39.659455   34695 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 21:49 /usr/share/ca-certificates/22990.pem
	I0717 22:03:39.659678   34695 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 21:49 /usr/share/ca-certificates/22990.pem
	I0717 22:03:39.659721   34695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22990.pem
	I0717 22:03:39.665524   34695 command_runner.go:130] > 51391683
	I0717 22:03:39.665606   34695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22990.pem /etc/ssl/certs/51391683.0"
	I0717 22:03:39.675730   34695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/229902.pem && ln -fs /usr/share/ca-certificates/229902.pem /etc/ssl/certs/229902.pem"
	I0717 22:03:39.686000   34695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/229902.pem
	I0717 22:03:39.690871   34695 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 21:49 /usr/share/ca-certificates/229902.pem
	I0717 22:03:39.690975   34695 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 21:49 /usr/share/ca-certificates/229902.pem
	I0717 22:03:39.691014   34695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/229902.pem
	I0717 22:03:39.696419   34695 command_runner.go:130] > 3ec20f2e
	I0717 22:03:39.696733   34695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/229902.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 22:03:39.706825   34695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 22:03:39.716662   34695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:03:39.721511   34695 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:03:39.721553   34695 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:03:39.721599   34695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:03:39.727362   34695 command_runner.go:130] > b5213941
	I0717 22:03:39.727442   34695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
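The 51391683, 3ec20f2e and b5213941 values above are OpenSSL subject-hash names: the hash printed for each certificate becomes the symlink name under /etc/ssl/certs that the ln -fs commands create. That relationship can be verified directly, for example for the minikubeCA certificate:

    minikube ssh -p multinode-009530 -- "openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem; readlink /etc/ssl/certs/b5213941.0"
    # expected output: b5213941, then /etc/ssl/certs/minikubeCA.pem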
	I0717 22:03:39.737225   34695 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 22:03:39.741838   34695 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 22:03:39.741894   34695 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 22:03:39.741946   34695 kubeadm.go:404] StartCluster: {Name:multinode-009530 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-009530 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.222 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binary
Mirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:03:39.742043   34695 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 22:03:39.742089   34695 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:03:39.772426   34695 cri.go:89] found id: ""
	I0717 22:03:39.772513   34695 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 22:03:39.781197   34695 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0717 22:03:39.781222   34695 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0717 22:03:39.781227   34695 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0717 22:03:39.781401   34695 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 22:03:39.790236   34695 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 22:03:39.799083   34695 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0717 22:03:39.799110   34695 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0717 22:03:39.799122   34695 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0717 22:03:39.799135   34695 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:03:39.799172   34695 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:03:39.799203   34695 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 22:03:40.132023   34695 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 22:03:40.132055   34695 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 22:03:52.521105   34695 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0717 22:03:52.521146   34695 command_runner.go:130] > [init] Using Kubernetes version: v1.27.3
	I0717 22:03:52.521195   34695 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 22:03:52.521204   34695 command_runner.go:130] > [preflight] Running pre-flight checks
	I0717 22:03:52.521295   34695 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 22:03:52.521311   34695 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 22:03:52.521463   34695 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 22:03:52.521476   34695 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 22:03:52.521615   34695 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 22:03:52.521628   34695 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 22:03:52.521722   34695 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 22:03:52.523720   34695 out.go:204]   - Generating certificates and keys ...
	I0717 22:03:52.521768   34695 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 22:03:52.523821   34695 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 22:03:52.523842   34695 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0717 22:03:52.523928   34695 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 22:03:52.523936   34695 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0717 22:03:52.524037   34695 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 22:03:52.524059   34695 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 22:03:52.524131   34695 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0717 22:03:52.524145   34695 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0717 22:03:52.524237   34695 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0717 22:03:52.524249   34695 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0717 22:03:52.524332   34695 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0717 22:03:52.524349   34695 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0717 22:03:52.524409   34695 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0717 22:03:52.524422   34695 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0717 22:03:52.524565   34695 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-009530] and IPs [192.168.39.222 127.0.0.1 ::1]
	I0717 22:03:52.524578   34695 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-009530] and IPs [192.168.39.222 127.0.0.1 ::1]
	I0717 22:03:52.524643   34695 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0717 22:03:52.524656   34695 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0717 22:03:52.524806   34695 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-009530] and IPs [192.168.39.222 127.0.0.1 ::1]
	I0717 22:03:52.524815   34695 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-009530] and IPs [192.168.39.222 127.0.0.1 ::1]
	I0717 22:03:52.524895   34695 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 22:03:52.524905   34695 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 22:03:52.524990   34695 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 22:03:52.524998   34695 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 22:03:52.525060   34695 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0717 22:03:52.525070   34695 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0717 22:03:52.525145   34695 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 22:03:52.525162   34695 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 22:03:52.525241   34695 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 22:03:52.525261   34695 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 22:03:52.525345   34695 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 22:03:52.525370   34695 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 22:03:52.525452   34695 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 22:03:52.525462   34695 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 22:03:52.525554   34695 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 22:03:52.525565   34695 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 22:03:52.525748   34695 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 22:03:52.525767   34695 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 22:03:52.525894   34695 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 22:03:52.525912   34695 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 22:03:52.525960   34695 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 22:03:52.525971   34695 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0717 22:03:52.526065   34695 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 22:03:52.527960   34695 out.go:204]   - Booting up control plane ...
	I0717 22:03:52.526099   34695 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 22:03:52.528077   34695 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 22:03:52.528090   34695 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 22:03:52.528161   34695 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 22:03:52.528171   34695 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 22:03:52.528274   34695 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 22:03:52.528291   34695 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 22:03:52.528394   34695 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 22:03:52.528406   34695 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 22:03:52.528595   34695 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 22:03:52.528604   34695 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 22:03:52.528708   34695 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.005103 seconds
	I0717 22:03:52.528726   34695 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.005103 seconds
	I0717 22:03:52.528887   34695 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 22:03:52.528906   34695 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 22:03:52.529034   34695 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 22:03:52.529041   34695 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 22:03:52.529120   34695 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 22:03:52.529134   34695 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0717 22:03:52.529369   34695 kubeadm.go:322] [mark-control-plane] Marking the node multinode-009530 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 22:03:52.529383   34695 command_runner.go:130] > [mark-control-plane] Marking the node multinode-009530 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 22:03:52.529446   34695 kubeadm.go:322] [bootstrap-token] Using token: nt0nhq.gvbjuoky524z1k9g
	I0717 22:03:52.531135   34695 out.go:204]   - Configuring RBAC rules ...
	I0717 22:03:52.529536   34695 command_runner.go:130] > [bootstrap-token] Using token: nt0nhq.gvbjuoky524z1k9g
	I0717 22:03:52.531270   34695 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 22:03:52.531290   34695 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 22:03:52.531410   34695 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 22:03:52.531411   34695 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 22:03:52.531618   34695 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 22:03:52.531630   34695 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 22:03:52.531787   34695 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 22:03:52.531797   34695 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 22:03:52.531958   34695 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 22:03:52.531981   34695 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 22:03:52.532067   34695 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 22:03:52.532077   34695 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 22:03:52.532243   34695 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 22:03:52.532249   34695 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 22:03:52.532287   34695 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 22:03:52.532293   34695 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0717 22:03:52.532348   34695 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 22:03:52.532365   34695 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0717 22:03:52.532374   34695 kubeadm.go:322] 
	I0717 22:03:52.532448   34695 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 22:03:52.532456   34695 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0717 22:03:52.532459   34695 kubeadm.go:322] 
	I0717 22:03:52.532555   34695 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 22:03:52.532567   34695 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0717 22:03:52.532577   34695 kubeadm.go:322] 
	I0717 22:03:52.532618   34695 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 22:03:52.532627   34695 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0717 22:03:52.532693   34695 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 22:03:52.532700   34695 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 22:03:52.532756   34695 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 22:03:52.532766   34695 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 22:03:52.532776   34695 kubeadm.go:322] 
	I0717 22:03:52.532841   34695 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0717 22:03:52.532847   34695 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0717 22:03:52.532851   34695 kubeadm.go:322] 
	I0717 22:03:52.532921   34695 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 22:03:52.532931   34695 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 22:03:52.532939   34695 kubeadm.go:322] 
	I0717 22:03:52.533019   34695 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 22:03:52.533028   34695 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0717 22:03:52.533114   34695 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 22:03:52.533124   34695 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 22:03:52.533195   34695 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 22:03:52.533212   34695 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 22:03:52.533230   34695 kubeadm.go:322] 
	I0717 22:03:52.533326   34695 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0717 22:03:52.533335   34695 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 22:03:52.533421   34695 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0717 22:03:52.533431   34695 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 22:03:52.533442   34695 kubeadm.go:322] 
	I0717 22:03:52.533551   34695 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token nt0nhq.gvbjuoky524z1k9g \
	I0717 22:03:52.533559   34695 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token nt0nhq.gvbjuoky524z1k9g \
	I0717 22:03:52.533650   34695 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb \
	I0717 22:03:52.533656   34695 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb \
	I0717 22:03:52.533681   34695 command_runner.go:130] > 	--control-plane 
	I0717 22:03:52.533687   34695 kubeadm.go:322] 	--control-plane 
	I0717 22:03:52.533691   34695 kubeadm.go:322] 
	I0717 22:03:52.533768   34695 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0717 22:03:52.533776   34695 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 22:03:52.533781   34695 kubeadm.go:322] 
	I0717 22:03:52.533851   34695 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token nt0nhq.gvbjuoky524z1k9g \
	I0717 22:03:52.533856   34695 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token nt0nhq.gvbjuoky524z1k9g \
	I0717 22:03:52.533947   34695 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb 
	I0717 22:03:52.533964   34695 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb 
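The --discovery-token-ca-cert-hash printed in the join commands above is, per kubeadm's documented behavior, the SHA-256 of the cluster CA's DER-encoded public key (SubjectPublicKeyInfo). A small Go sketch that recomputes it from a CA certificate file; the /var/lib/minikube/certs/ca.crt path is an assumption based on the certificateDir logged earlier:

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Assumed path; kubeadm above used certificateDir /var/lib/minikube/certs.
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Hash the DER-encoded SubjectPublicKeyInfo, which is what
	// --discovery-token-ca-cert-hash encodes.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Printf("sha256:%x\n", sum)
}
```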
	I0717 22:03:52.533982   34695 cni.go:84] Creating CNI manager for ""
	I0717 22:03:52.533995   34695 cni.go:137] 1 nodes found, recommending kindnet
	I0717 22:03:52.536050   34695 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 22:03:52.537579   34695 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 22:03:52.544117   34695 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0717 22:03:52.544142   34695 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0717 22:03:52.544151   34695 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0717 22:03:52.544157   34695 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 22:03:52.544171   34695 command_runner.go:130] > Access: 2023-07-17 22:03:20.325572299 +0000
	I0717 22:03:52.544176   34695 command_runner.go:130] > Modify: 2023-07-15 02:34:28.000000000 +0000
	I0717 22:03:52.544181   34695 command_runner.go:130] > Change: 2023-07-17 22:03:18.497572299 +0000
	I0717 22:03:52.544185   34695 command_runner.go:130] >  Birth: -
	I0717 22:03:52.544385   34695 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 22:03:52.544402   34695 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 22:03:52.616773   34695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 22:03:53.691597   34695 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0717 22:03:53.701957   34695 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0717 22:03:53.720304   34695 command_runner.go:130] > serviceaccount/kindnet created
	I0717 22:03:53.742192   34695 command_runner.go:130] > daemonset.apps/kindnet created
	I0717 22:03:53.745123   34695 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.128312031s)
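The lines above show minikube copying a kindnet manifest to /var/tmp/minikube/cni.yaml and applying it with the cluster's own kubectl binary. A stripped-down sketch of that write-then-apply pattern, assuming the same paths from the log; the manifest here is a stand-in ConfigMap, not the real kindnet YAML:

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Stand-in object; minikube writes the full kindnet manifest here instead.
	manifest := []byte(`apiVersion: v1
kind: ConfigMap
metadata:
  name: cni-sketch
  namespace: kube-system
data: {}
`)
	path := "/var/tmp/minikube/cni.yaml"

	if err := os.WriteFile(path, manifest, 0o644); err != nil {
		log.Fatalf("write manifest: %v", err)
	}

	// Same shape as the logged command:
	//   kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	cmd := exec.Command("/var/lib/minikube/binaries/v1.27.3/kubectl",
		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", path)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("kubectl apply: %v", err)
	}
}
```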
	I0717 22:03:53.745172   34695 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 22:03:53.745258   34695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:03:53.745269   34695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.31.0 minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8 minikube.k8s.io/name=multinode-009530 minikube.k8s.io/updated_at=2023_07_17T22_03_53_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:03:53.949628   34695 command_runner.go:130] > node/multinode-009530 labeled
	I0717 22:03:53.951149   34695 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0717 22:03:53.951219   34695 command_runner.go:130] > -16
	I0717 22:03:53.951241   34695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:03:53.951257   34695 ops.go:34] apiserver oom_adj: -16
	I0717 22:03:54.037437   34695 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:03:54.538525   34695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:03:54.616547   34695 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:03:55.038236   34695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:03:55.120719   34695 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:03:55.538492   34695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:03:55.625801   34695 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:03:56.038447   34695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:03:56.125907   34695 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:03:56.538229   34695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:03:56.623240   34695 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:03:57.038570   34695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:03:57.121782   34695 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:03:57.538351   34695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:03:57.632754   34695 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:03:58.038410   34695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:03:58.126590   34695 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:03:58.538188   34695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:03:58.633117   34695 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:03:59.038006   34695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:03:59.129287   34695 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:03:59.538741   34695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:03:59.627464   34695 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:04:00.038655   34695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:04:00.123699   34695 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:04:00.538694   34695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:04:00.638619   34695 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:04:01.038162   34695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:04:01.142535   34695 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:04:01.538620   34695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:04:01.642037   34695 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:04:02.038078   34695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:04:02.137459   34695 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:04:02.538088   34695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:04:02.650348   34695 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:04:03.038184   34695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:04:03.145304   34695 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:04:03.538734   34695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:04:03.638883   34695 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:04:04.038909   34695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:04:04.128951   34695 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:04:04.538209   34695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:04:04.648510   34695 command_runner.go:130] > NAME      SECRETS   AGE
	I0717 22:04:04.648537   34695 command_runner.go:130] > default   0         0s
	I0717 22:04:04.651434   34695 kubeadm.go:1081] duration metric: took 10.906229378s to wait for elevateKubeSystemPrivileges.
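The burst of `serviceaccounts "default" not found` lines above is minikube polling roughly every 500ms until the service-account controller in kube-controller-manager creates the default service account (about 10.9s in this run). A minimal sketch of that retry loop, shelling out to the same kubectl binary and kubeconfig shown in the log:

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.27.3/kubectl" // path from the log
	deadline := time.Now().Add(2 * time.Minute)

	for time.Now().Before(deadline) {
		out, err := exec.Command(kubectl,
			"get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
		if err == nil {
			fmt.Printf("default service account exists:\n%s", out)
			return
		}
		// Until the controller creates it, this returns:
		//   Error from server (NotFound): serviceaccounts "default" not found
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for the default service account")
}
```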
	I0717 22:04:04.651455   34695 kubeadm.go:406] StartCluster complete in 24.909512171s
	I0717 22:04:04.651470   34695 settings.go:142] acquiring lock: {Name:mk9be0d05c943f5010cb1ca9690e7c6a87f18950 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:04:04.651546   34695 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:04:04.652103   34695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/kubeconfig: {Name:mk1295c692902c2f497638c9fc1fd126fdb1a2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:04:04.652337   34695 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 22:04:04.652549   34695 config.go:182] Loaded profile config "multinode-009530": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:04:04.652501   34695 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 22:04:04.652598   34695 addons.go:69] Setting default-storageclass=true in profile "multinode-009530"
	I0717 22:04:04.652596   34695 addons.go:69] Setting storage-provisioner=true in profile "multinode-009530"
	I0717 22:04:04.652614   34695 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-009530"
	I0717 22:04:04.652619   34695 addons.go:231] Setting addon storage-provisioner=true in "multinode-009530"
	I0717 22:04:04.652672   34695 host.go:66] Checking if "multinode-009530" exists ...
	I0717 22:04:04.652699   34695 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:04:04.653059   34695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:04:04.653001   34695 kapi.go:59] client config for multinode-009530: &rest.Config{Host:"https://192.168.39.222:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/client.crt", KeyFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/client.key", CAFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 22:04:04.653091   34695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:04:04.653095   34695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:04:04.653122   34695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:04:04.653777   34695 cert_rotation.go:137] Starting client certificate rotation controller
	I0717 22:04:04.654021   34695 round_trippers.go:463] GET https://192.168.39.222:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0717 22:04:04.654033   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:04.654044   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:04.654057   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:04.667807   34695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40697
	I0717 22:04:04.668219   34695 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0717 22:04:04.668239   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:04.668251   34695 round_trippers.go:580]     Content-Length: 291
	I0717 22:04:04.668259   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:04 GMT
	I0717 22:04:04.668264   34695 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:04:04.668267   34695 round_trippers.go:580]     Audit-Id: eda2844e-a199-4889-ab35-6880677430ca
	I0717 22:04:04.668275   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:04.668287   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:04.668297   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:04.668304   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:04.668748   34695 main.go:141] libmachine: Using API Version  1
	I0717 22:04:04.668768   34695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:04:04.668797   34695 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c60c6831-559f-4b19-8b15-656b8972a35c","resourceVersion":"262","creationTimestamp":"2023-07-17T22:03:52Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0717 22:04:04.669067   34695 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:04:04.669276   34695 request.go:1188] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c60c6831-559f-4b19-8b15-656b8972a35c","resourceVersion":"262","creationTimestamp":"2023-07-17T22:03:52Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0717 22:04:04.669337   34695 round_trippers.go:463] PUT https://192.168.39.222:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0717 22:04:04.669348   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:04.669359   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:04.669371   34695 round_trippers.go:473]     Content-Type: application/json
	I0717 22:04:04.669383   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:04.669616   34695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:04:04.669660   34695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:04:04.671246   34695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43317
	I0717 22:04:04.671645   34695 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:04:04.672140   34695 main.go:141] libmachine: Using API Version  1
	I0717 22:04:04.672174   34695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:04:04.672568   34695 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:04:04.672783   34695 main.go:141] libmachine: (multinode-009530) Calling .GetState
	I0717 22:04:04.674981   34695 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:04:04.675287   34695 kapi.go:59] client config for multinode-009530: &rest.Config{Host:"https://192.168.39.222:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/client.crt", KeyFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/client.key", CAFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 22:04:04.675691   34695 round_trippers.go:463] GET https://192.168.39.222:8443/apis/storage.k8s.io/v1/storageclasses
	I0717 22:04:04.675708   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:04.675720   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:04.675730   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:04.681457   34695 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 22:04:04.681477   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:04.681487   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:04.681497   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:04.681505   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:04.681528   34695 round_trippers.go:580]     Content-Length: 109
	I0717 22:04:04.681536   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:04 GMT
	I0717 22:04:04.681548   34695 round_trippers.go:580]     Audit-Id: 49330932-897c-4e17-be26-d8fa653f97d2
	I0717 22:04:04.681560   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:04.681584   34695 request.go:1188] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"349"},"items":[]}
	I0717 22:04:04.681861   34695 addons.go:231] Setting addon default-storageclass=true in "multinode-009530"
	I0717 22:04:04.681902   34695 host.go:66] Checking if "multinode-009530" exists ...
	I0717 22:04:04.682128   34695 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0717 22:04:04.682147   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:04.682158   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:04.682167   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:04.682184   34695 round_trippers.go:580]     Content-Length: 291
	I0717 22:04:04.682195   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:04 GMT
	I0717 22:04:04.682207   34695 round_trippers.go:580]     Audit-Id: 77892614-7a9d-45e5-b42c-4b7da79aa453
	I0717 22:04:04.682219   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:04.682230   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:04.682256   34695 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c60c6831-559f-4b19-8b15-656b8972a35c","resourceVersion":"349","creationTimestamp":"2023-07-17T22:03:52Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0717 22:04:04.682294   34695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:04:04.682328   34695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:04:04.684621   34695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33479
	I0717 22:04:04.685068   34695 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:04:04.685604   34695 main.go:141] libmachine: Using API Version  1
	I0717 22:04:04.685632   34695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:04:04.685929   34695 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:04:04.686102   34695 main.go:141] libmachine: (multinode-009530) Calling .GetState
	I0717 22:04:04.687832   34695 main.go:141] libmachine: (multinode-009530) Calling .DriverName
	I0717 22:04:04.690059   34695 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:04:04.691864   34695 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:04:04.691882   34695 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 22:04:04.691900   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHHostname
	I0717 22:04:04.694839   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:04:04.695299   34695 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:04:04.695330   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:04:04.695526   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHPort
	I0717 22:04:04.695722   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:04:04.695909   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHUsername
	I0717 22:04:04.696078   34695 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530/id_rsa Username:docker}
	I0717 22:04:04.698785   34695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39521
	I0717 22:04:04.699119   34695 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:04:04.699523   34695 main.go:141] libmachine: Using API Version  1
	I0717 22:04:04.699540   34695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:04:04.699783   34695 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:04:04.700344   34695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:04:04.700380   34695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:04:04.714696   34695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41793
	I0717 22:04:04.715157   34695 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:04:04.715677   34695 main.go:141] libmachine: Using API Version  1
	I0717 22:04:04.715705   34695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:04:04.716053   34695 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:04:04.716239   34695 main.go:141] libmachine: (multinode-009530) Calling .GetState
	I0717 22:04:04.717838   34695 main.go:141] libmachine: (multinode-009530) Calling .DriverName
	I0717 22:04:04.718059   34695 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 22:04:04.718073   34695 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 22:04:04.718084   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHHostname
	I0717 22:04:04.721197   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:04:04.721644   34695 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:04:04.721690   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:04:04.721847   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHPort
	I0717 22:04:04.722026   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:04:04.722191   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHUsername
	I0717 22:04:04.722316   34695 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530/id_rsa Username:docker}
	I0717 22:04:04.786097   34695 command_runner.go:130] > apiVersion: v1
	I0717 22:04:04.786118   34695 command_runner.go:130] > data:
	I0717 22:04:04.786133   34695 command_runner.go:130] >   Corefile: |
	I0717 22:04:04.786137   34695 command_runner.go:130] >     .:53 {
	I0717 22:04:04.786144   34695 command_runner.go:130] >         errors
	I0717 22:04:04.786158   34695 command_runner.go:130] >         health {
	I0717 22:04:04.786168   34695 command_runner.go:130] >            lameduck 5s
	I0717 22:04:04.786178   34695 command_runner.go:130] >         }
	I0717 22:04:04.786187   34695 command_runner.go:130] >         ready
	I0717 22:04:04.786200   34695 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0717 22:04:04.786208   34695 command_runner.go:130] >            pods insecure
	I0717 22:04:04.786213   34695 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0717 22:04:04.786221   34695 command_runner.go:130] >            ttl 30
	I0717 22:04:04.786225   34695 command_runner.go:130] >         }
	I0717 22:04:04.786229   34695 command_runner.go:130] >         prometheus :9153
	I0717 22:04:04.786233   34695 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0717 22:04:04.786242   34695 command_runner.go:130] >            max_concurrent 1000
	I0717 22:04:04.786248   34695 command_runner.go:130] >         }
	I0717 22:04:04.786257   34695 command_runner.go:130] >         cache 30
	I0717 22:04:04.786264   34695 command_runner.go:130] >         loop
	I0717 22:04:04.786274   34695 command_runner.go:130] >         reload
	I0717 22:04:04.786284   34695 command_runner.go:130] >         loadbalance
	I0717 22:04:04.786293   34695 command_runner.go:130] >     }
	I0717 22:04:04.786300   34695 command_runner.go:130] > kind: ConfigMap
	I0717 22:04:04.786309   34695 command_runner.go:130] > metadata:
	I0717 22:04:04.786321   34695 command_runner.go:130] >   creationTimestamp: "2023-07-17T22:03:52Z"
	I0717 22:04:04.786328   34695 command_runner.go:130] >   name: coredns
	I0717 22:04:04.786332   34695 command_runner.go:130] >   namespace: kube-system
	I0717 22:04:04.786339   34695 command_runner.go:130] >   resourceVersion: "258"
	I0717 22:04:04.786346   34695 command_runner.go:130] >   uid: 74a460b6-e979-4777-9478-ab3352b785ed
	I0717 22:04:04.786475   34695 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 22:04:04.865647   34695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 22:04:04.879480   34695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:04:05.182702   34695 round_trippers.go:463] GET https://192.168.39.222:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0717 22:04:05.182727   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:05.182738   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:05.182748   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:05.198908   34695 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0717 22:04:05.198940   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:05.198950   34695 round_trippers.go:580]     Audit-Id: c571100d-07c8-4ab0-af88-a98d60d032b2
	I0717 22:04:05.198958   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:05.198965   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:05.198973   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:05.198981   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:05.198988   34695 round_trippers.go:580]     Content-Length: 291
	I0717 22:04:05.198996   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:05 GMT
	I0717 22:04:05.199026   34695 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c60c6831-559f-4b19-8b15-656b8972a35c","resourceVersion":"360","creationTimestamp":"2023-07-17T22:03:52Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0717 22:04:05.199144   34695 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-009530" context rescaled to 1 replicas
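The GET/PUT pair against .../deployments/coredns/scale above rescales CoreDNS from 2 replicas to 1 for a single-node start. With client-go, the same Scale-subresource round trip looks roughly like this; the kubeconfig path is from the log and the error handling is simplified, so this is a sketch rather than minikube's kapi code:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()
	deploys := cs.AppsV1().Deployments("kube-system")

	// GET .../deployments/coredns/scale
	scale, err := deploys.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	// PUT the same Scale object back with spec.replicas lowered to 1.
	scale.Spec.Replicas = 1
	if _, err := deploys.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("coredns rescaled to 1 replica")
}
```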
	I0717 22:04:05.199177   34695 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.222 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 22:04:05.202160   34695 out.go:177] * Verifying Kubernetes components...
	I0717 22:04:05.203665   34695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:04:05.721205   34695 command_runner.go:130] > configmap/coredns replaced
	I0717 22:04:05.721275   34695 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
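The long sed pipeline a few lines earlier rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.39.1). A rough Go sketch of just the text transformation it performs on the Corefile; this only mimics the string edit, and replacing the ConfigMap in the cluster is a separate step:

```go
package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a CoreDNS hosts{} block immediately before the
// "forward . /etc/resolv.conf" line, mirroring the sed edit seen in the log.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		hostIP)
	var out strings.Builder
	for _, line := range strings.Split(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line + "\n")
	}
	return strings.TrimSuffix(out.String(), "\n")
}

func main() {
	corefile := `.:53 {
        errors
        ready
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
    }`
	fmt.Println(injectHostRecord(corefile, "192.168.39.1"))
}
```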
	I0717 22:04:05.721306   34695 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0717 22:04:05.721364   34695 main.go:141] libmachine: Making call to close driver server
	I0717 22:04:05.721381   34695 main.go:141] libmachine: (multinode-009530) Calling .Close
	I0717 22:04:05.721700   34695 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:04:05.721716   34695 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:04:05.721727   34695 main.go:141] libmachine: Making call to close driver server
	I0717 22:04:05.721735   34695 main.go:141] libmachine: (multinode-009530) Calling .Close
	I0717 22:04:05.721969   34695 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:04:05.722001   34695 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:04:05.722014   34695 main.go:141] libmachine: Making call to close driver server
	I0717 22:04:05.722024   34695 main.go:141] libmachine: (multinode-009530) Calling .Close
	I0717 22:04:05.722022   34695 main.go:141] libmachine: (multinode-009530) DBG | Closing plugin on server side
	I0717 22:04:05.722248   34695 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:04:05.722266   34695 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:04:05.905476   34695 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0717 22:04:05.916478   34695 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0717 22:04:05.926000   34695 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0717 22:04:05.934897   34695 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0717 22:04:05.952861   34695 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0717 22:04:05.971203   34695 command_runner.go:130] > pod/storage-provisioner created
	I0717 22:04:05.974200   34695 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.094687346s)
	I0717 22:04:05.974247   34695 main.go:141] libmachine: Making call to close driver server
	I0717 22:04:05.974260   34695 main.go:141] libmachine: (multinode-009530) Calling .Close
	I0717 22:04:05.974565   34695 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:04:05.974588   34695 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:04:05.974596   34695 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:04:05.974599   34695 main.go:141] libmachine: Making call to close driver server
	I0717 22:04:05.974738   34695 main.go:141] libmachine: (multinode-009530) Calling .Close
	I0717 22:04:05.974909   34695 kapi.go:59] client config for multinode-009530: &rest.Config{Host:"https://192.168.39.222:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/client.crt", KeyFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/client.key", CAFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 22:04:05.975139   34695 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:04:05.975159   34695 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:04:05.975232   34695 node_ready.go:35] waiting up to 6m0s for node "multinode-009530" to be "Ready" ...
	I0717 22:04:05.976942   34695 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0717 22:04:05.975314   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:04:05.978568   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:05.978582   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:05.978582   34695 addons.go:502] enable addons completed in 1.326082935s: enabled=[default-storageclass storage-provisioner]
	I0717 22:04:05.978596   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:05.983289   34695 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 22:04:05.983307   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:05.983316   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:05.983324   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:05.983332   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:05.983343   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:05.983351   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:05 GMT
	I0717 22:04:05.983359   34695 round_trippers.go:580]     Audit-Id: e967c533-f203-43c2-a93e-8fa5ef78343b
	I0717 22:04:05.983491   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"356","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0717 22:04:06.484831   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:04:06.484857   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:06.484869   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:06.484877   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:06.488180   34695 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:04:06.488199   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:06.488206   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:06.488211   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:06.488217   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:06 GMT
	I0717 22:04:06.488222   34695 round_trippers.go:580]     Audit-Id: 1eb1de7c-ec1d-4275-9b03-8283967275e4
	I0717 22:04:06.488227   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:06.488234   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:06.488807   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"356","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0717 22:04:06.984473   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:04:06.984505   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:06.984513   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:06.984520   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:06.987754   34695 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:04:06.987777   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:06.987788   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:06.987793   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:06.987799   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:06 GMT
	I0717 22:04:06.987804   34695 round_trippers.go:580]     Audit-Id: bf45ca43-7694-44af-bc29-f0a09dfab200
	I0717 22:04:06.987809   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:06.987815   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:06.987912   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"356","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0717 22:04:07.484540   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:04:07.484562   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:07.484570   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:07.484576   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:07.487144   34695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:04:07.487163   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:07.487170   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:07.487175   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:07 GMT
	I0717 22:04:07.487181   34695 round_trippers.go:580]     Audit-Id: d67ddb95-792a-4029-b3f2-bed50adb664f
	I0717 22:04:07.487186   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:07.487194   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:07.487203   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:07.487377   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"356","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0717 22:04:07.985026   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:04:07.985049   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:07.985062   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:07.985072   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:07.987936   34695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:04:07.987959   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:07.987969   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:07.987977   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:07 GMT
	I0717 22:04:07.987984   34695 round_trippers.go:580]     Audit-Id: 22a84cf2-2d83-4bfc-ad95-644203d2ef9c
	I0717 22:04:07.987993   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:07.988000   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:07.988009   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:07.988150   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"356","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0717 22:04:07.988467   34695 node_ready.go:58] node "multinode-009530" has status "Ready":"False"
	I0717 22:04:08.484808   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:04:08.484831   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:08.484839   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:08.484845   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:08.487936   34695 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:04:08.487958   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:08.487965   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:08 GMT
	I0717 22:04:08.487974   34695 round_trippers.go:580]     Audit-Id: b81cf325-62c1-434a-b2f9-5dd67fe4f062
	I0717 22:04:08.487984   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:08.487991   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:08.487998   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:08.488006   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:08.488504   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"356","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0717 22:04:08.985209   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:04:08.985231   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:08.985239   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:08.985246   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:08.988300   34695 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:04:08.988335   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:08.988342   34695 round_trippers.go:580]     Audit-Id: 1cf3b5d1-f4ee-42ce-bee1-81892ab28b58
	I0717 22:04:08.988348   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:08.988353   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:08.988358   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:08.988363   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:08.988368   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:08 GMT
	I0717 22:04:08.988765   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"356","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0717 22:04:09.484390   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:04:09.484409   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:09.484417   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:09.484423   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:09.487171   34695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:04:09.487190   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:09.487198   34695 round_trippers.go:580]     Audit-Id: 2fe8d261-8b38-4051-8d67-4d2a4e24eaf7
	I0717 22:04:09.487204   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:09.487209   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:09.487215   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:09.487220   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:09.487225   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:09 GMT
	I0717 22:04:09.487908   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"356","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0717 22:04:09.984221   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:04:09.984245   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:09.984253   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:09.984259   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:09.987476   34695 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:04:09.987498   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:09.987505   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:09.987510   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:09 GMT
	I0717 22:04:09.987516   34695 round_trippers.go:580]     Audit-Id: 4181344e-74ba-4fe3-a66f-5a58c4c690b9
	I0717 22:04:09.987521   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:09.987526   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:09.987531   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:09.988136   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"356","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0717 22:04:10.484860   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:04:10.484892   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:10.484903   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:10.484912   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:10.489970   34695 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 22:04:10.490000   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:10.490010   34695 round_trippers.go:580]     Audit-Id: d2ca8aba-955c-4e31-b3ab-0c303621d4bd
	I0717 22:04:10.490020   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:10.490029   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:10.490037   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:10.490047   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:10.490056   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:10 GMT
	I0717 22:04:10.490229   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"356","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0717 22:04:10.490542   34695 node_ready.go:58] node "multinode-009530" has status "Ready":"False"
	I0717 22:04:10.984976   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:04:10.985005   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:10.985020   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:10.985032   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:10.989564   34695 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 22:04:10.989590   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:10.989600   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:10.989609   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:10.989617   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:10 GMT
	I0717 22:04:10.989626   34695 round_trippers.go:580]     Audit-Id: 19e498e5-2e29-4f98-97e4-e74da0acf9cb
	I0717 22:04:10.989633   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:10.989642   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:10.989990   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"425","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 5971 chars]
	I0717 22:04:10.990423   34695 node_ready.go:49] node "multinode-009530" has status "Ready":"True"
	I0717 22:04:10.990444   34695 node_ready.go:38] duration metric: took 5.01519341s waiting for node "multinode-009530" to be "Ready" ...
	I0717 22:04:10.990454   34695 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:04:10.990522   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods
	I0717 22:04:10.990534   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:10.990544   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:10.990559   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:10.997195   34695 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 22:04:10.997215   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:10.997224   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:10.997232   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:10 GMT
	I0717 22:04:10.997240   34695 round_trippers.go:580]     Audit-Id: d0c42fef-d187-4cd3-8938-33947c68649d
	I0717 22:04:10.997247   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:10.997254   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:10.997262   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:10.998100   34695 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"426"},"items":[{"metadata":{"name":"coredns-5d78c9869d-z4fr8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"1fb1d992-a7b6-4259-ba61-dc4092c65c44","resourceVersion":"392","creationTimestamp":"2023-07-17T22:04:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a6163f77-c2a4-4a3f-a656-fb3401fc7602","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6163f77-c2a4-4a3f-a656-fb3401fc7602\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52218 chars]
	I0717 22:04:11.000958   34695 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-z4fr8" in "kube-system" namespace to be "Ready" ...
	I0717 22:04:11.001038   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-z4fr8
	I0717 22:04:11.001049   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:11.001056   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:11.001063   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:11.020509   34695 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0717 22:04:11.020537   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:11.020546   34695 round_trippers.go:580]     Audit-Id: 3c355dc5-f3b3-4ae5-a174-8c1be58b8b7f
	I0717 22:04:11.020552   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:11.020557   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:11.020563   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:11.020568   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:11.020573   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:11 GMT
	I0717 22:04:11.021646   34695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-z4fr8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"1fb1d992-a7b6-4259-ba61-dc4092c65c44","resourceVersion":"427","creationTimestamp":"2023-07-17T22:04:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a6163f77-c2a4-4a3f-a656-fb3401fc7602","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6163f77-c2a4-4a3f-a656-fb3401fc7602\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 4762 chars]
	I0717 22:04:11.022031   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:04:11.022048   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:11.022058   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:11.022066   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:11.027579   34695 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 22:04:11.027601   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:11.027611   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:11.027620   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:11.027629   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:11 GMT
	I0717 22:04:11.027637   34695 round_trippers.go:580]     Audit-Id: c22e2ee7-e69d-47b6-b772-d79b3f1879c7
	I0717 22:04:11.027645   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:11.027663   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:11.027801   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"426","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0717 22:04:11.528620   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-z4fr8
	I0717 22:04:11.528649   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:11.528657   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:11.528663   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:11.532000   34695 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:04:11.532025   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:11.532034   34695 round_trippers.go:580]     Audit-Id: 8aed7146-392f-4796-9170-a1a5d6ca22ee
	I0717 22:04:11.532043   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:11.532053   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:11.532063   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:11.532085   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:11.532096   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:11 GMT
	I0717 22:04:11.532194   34695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-z4fr8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"1fb1d992-a7b6-4259-ba61-dc4092c65c44","resourceVersion":"431","creationTimestamp":"2023-07-17T22:04:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a6163f77-c2a4-4a3f-a656-fb3401fc7602","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6163f77-c2a4-4a3f-a656-fb3401fc7602\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0717 22:04:11.532596   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:04:11.532609   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:11.532616   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:11.532622   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:11.534857   34695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:04:11.534877   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:11.534884   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:11 GMT
	I0717 22:04:11.534890   34695 round_trippers.go:580]     Audit-Id: 6b820d2b-6a05-48ee-a80d-7368e5f66f83
	I0717 22:04:11.534895   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:11.534900   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:11.534906   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:11.534912   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:11.535072   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"426","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0717 22:04:12.028714   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-z4fr8
	I0717 22:04:12.028738   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:12.028746   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:12.028752   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:12.031639   34695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:04:12.031657   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:12.031664   34695 round_trippers.go:580]     Audit-Id: 0d77546d-f78e-4319-9cdb-b25995f35436
	I0717 22:04:12.031670   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:12.031675   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:12.031680   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:12.031685   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:12.031690   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:12 GMT
	I0717 22:04:12.031850   34695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-z4fr8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"1fb1d992-a7b6-4259-ba61-dc4092c65c44","resourceVersion":"431","creationTimestamp":"2023-07-17T22:04:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a6163f77-c2a4-4a3f-a656-fb3401fc7602","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6163f77-c2a4-4a3f-a656-fb3401fc7602\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0717 22:04:12.032258   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:04:12.032270   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:12.032277   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:12.032283   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:12.036746   34695 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 22:04:12.036763   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:12.036769   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:12.036775   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:12.036780   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:12 GMT
	I0717 22:04:12.036785   34695 round_trippers.go:580]     Audit-Id: 96c7171c-cc3a-4525-8a04-dba29e81d2be
	I0717 22:04:12.036791   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:12.036796   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:12.036952   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"426","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0717 22:04:12.528554   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-z4fr8
	I0717 22:04:12.528586   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:12.528594   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:12.528609   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:12.532843   34695 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 22:04:12.532862   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:12.532872   34695 round_trippers.go:580]     Audit-Id: 8a3dea6d-5ded-4c24-8363-0036a7425f0e
	I0717 22:04:12.532877   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:12.532882   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:12.532888   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:12.532893   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:12.532898   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:12 GMT
	I0717 22:04:12.533239   34695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-z4fr8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"1fb1d992-a7b6-4259-ba61-dc4092c65c44","resourceVersion":"431","creationTimestamp":"2023-07-17T22:04:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a6163f77-c2a4-4a3f-a656-fb3401fc7602","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6163f77-c2a4-4a3f-a656-fb3401fc7602\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0717 22:04:12.533660   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:04:12.533671   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:12.533678   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:12.533684   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:12.541309   34695 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 22:04:12.541326   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:12.541333   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:12.541341   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:12.541350   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:12.541362   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:12.541376   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:12 GMT
	I0717 22:04:12.541385   34695 round_trippers.go:580]     Audit-Id: 00beea8f-7785-4801-9a03-e5df37ba6172
	I0717 22:04:12.541590   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"426","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0717 22:04:13.028925   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-z4fr8
	I0717 22:04:13.028950   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:13.028964   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:13.028974   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:13.031644   34695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:04:13.031669   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:13.031680   34695 round_trippers.go:580]     Audit-Id: 19a763ea-de08-440a-bb37-293219e940f0
	I0717 22:04:13.031689   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:13.031697   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:13.031705   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:13.031715   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:13.031723   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:13 GMT
	I0717 22:04:13.032053   34695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-z4fr8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"1fb1d992-a7b6-4259-ba61-dc4092c65c44","resourceVersion":"446","creationTimestamp":"2023-07-17T22:04:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a6163f77-c2a4-4a3f-a656-fb3401fc7602","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6163f77-c2a4-4a3f-a656-fb3401fc7602\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0717 22:04:13.032472   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:04:13.032497   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:13.032504   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:13.032510   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:13.034775   34695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:04:13.034795   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:13.034803   34695 round_trippers.go:580]     Audit-Id: 2ef73513-39fe-42f4-b15f-6872ff21cfe6
	I0717 22:04:13.034809   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:13.034818   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:13.034827   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:13.034836   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:13.034844   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:13 GMT
	I0717 22:04:13.035150   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"426","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0717 22:04:13.035468   34695 pod_ready.go:92] pod "coredns-5d78c9869d-z4fr8" in "kube-system" namespace has status "Ready":"True"
	I0717 22:04:13.035483   34695 pod_ready.go:81] duration metric: took 2.034504128s waiting for pod "coredns-5d78c9869d-z4fr8" in "kube-system" namespace to be "Ready" ...
	I0717 22:04:13.035491   34695 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:04:13.035536   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-009530
	I0717 22:04:13.035545   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:13.035552   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:13.035558   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:13.037926   34695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:04:13.037946   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:13.037969   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:13.037984   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:13.037992   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:13 GMT
	I0717 22:04:13.038002   34695 round_trippers.go:580]     Audit-Id: 0a942ef2-806b-46b9-84e7-e7e5a1a852ee
	I0717 22:04:13.038016   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:13.038027   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:13.038243   34695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-009530","namespace":"kube-system","uid":"aed75ad9-0156-4275-8a41-b68d09c15660","resourceVersion":"444","creationTimestamp":"2023-07-17T22:03:52Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.222:2379","kubernetes.io/config.hash":"ab77d0bbc5cf528d40fb1d6635b3acda","kubernetes.io/config.mirror":"ab77d0bbc5cf528d40fb1d6635b3acda","kubernetes.io/config.seen":"2023-07-17T22:03:52.473671520Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:03:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0717 22:04:13.038616   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:04:13.038630   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:13.038637   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:13.038643   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:13.040995   34695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:04:13.041014   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:13.041023   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:13 GMT
	I0717 22:04:13.041031   34695 round_trippers.go:580]     Audit-Id: 4394c986-c77d-4ef8-a03c-6cdd94f8548f
	I0717 22:04:13.041040   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:13.041053   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:13.041061   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:13.041072   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:13.041506   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"426","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0717 22:04:13.041794   34695 pod_ready.go:92] pod "etcd-multinode-009530" in "kube-system" namespace has status "Ready":"True"
	I0717 22:04:13.041808   34695 pod_ready.go:81] duration metric: took 6.310989ms waiting for pod "etcd-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:04:13.041819   34695 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:04:13.041857   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-009530
	I0717 22:04:13.041864   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:13.041870   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:13.041876   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:13.043715   34695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 22:04:13.043733   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:13.043742   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:13.043751   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:13 GMT
	I0717 22:04:13.043758   34695 round_trippers.go:580]     Audit-Id: a530f5e9-b849-402e-be8e-62d1083a375d
	I0717 22:04:13.043769   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:13.043780   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:13.043790   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:13.043934   34695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-009530","namespace":"kube-system","uid":"958b1550-f15f-49f3-acf3-dbab69f82fb8","resourceVersion":"442","creationTimestamp":"2023-07-17T22:03:52Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.222:8443","kubernetes.io/config.hash":"49e7615bd1aa66d6e32161e120c48180","kubernetes.io/config.mirror":"49e7615bd1aa66d6e32161e120c48180","kubernetes.io/config.seen":"2023-07-17T22:03:52.473675304Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:03:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0717 22:04:13.044319   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:04:13.044331   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:13.044338   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:13.044348   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:13.046123   34695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 22:04:13.046148   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:13.046154   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:13.046160   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:13.046168   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:13.046177   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:13 GMT
	I0717 22:04:13.046191   34695 round_trippers.go:580]     Audit-Id: 556c3073-54bb-4fb8-aa74-4b90cdef44c9
	I0717 22:04:13.046203   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:13.046577   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"426","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0717 22:04:13.046933   34695 pod_ready.go:92] pod "kube-apiserver-multinode-009530" in "kube-system" namespace has status "Ready":"True"
	I0717 22:04:13.046947   34695 pod_ready.go:81] duration metric: took 5.120112ms waiting for pod "kube-apiserver-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:04:13.046956   34695 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:04:13.047006   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-009530
	I0717 22:04:13.047016   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:13.047026   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:13.047036   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:13.049093   34695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:04:13.049108   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:13.049116   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:13.049124   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:13 GMT
	I0717 22:04:13.049133   34695 round_trippers.go:580]     Audit-Id: 415a4245-b4fb-4503-a0fc-02c3a4ecb62d
	I0717 22:04:13.049142   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:13.049151   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:13.049160   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:13.049381   34695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-009530","namespace":"kube-system","uid":"1c9dba7c-6497-41f0-b751-17988278c710","resourceVersion":"443","creationTimestamp":"2023-07-17T22:03:52Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d8b61663949a18745a23bcf487c538f2","kubernetes.io/config.mirror":"d8b61663949a18745a23bcf487c538f2","kubernetes.io/config.seen":"2023-07-17T22:03:52.473676600Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:03:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0717 22:04:13.049750   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:04:13.049762   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:13.049769   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:13.049778   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:13.051753   34695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 22:04:13.051772   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:13.051781   34695 round_trippers.go:580]     Audit-Id: 6bd1dc56-a025-4e11-b578-02caf74b1f0d
	I0717 22:04:13.051789   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:13.051799   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:13.051813   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:13.051823   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:13.051836   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:13 GMT
	I0717 22:04:13.051940   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"426","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0717 22:04:13.052274   34695 pod_ready.go:92] pod "kube-controller-manager-multinode-009530" in "kube-system" namespace has status "Ready":"True"
	I0717 22:04:13.052290   34695 pod_ready.go:81] duration metric: took 5.326163ms waiting for pod "kube-controller-manager-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:04:13.052302   34695 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m5spw" in "kube-system" namespace to be "Ready" ...
	I0717 22:04:13.052345   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m5spw
	I0717 22:04:13.052355   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:13.052365   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:13.052377   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:13.054318   34695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 22:04:13.054332   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:13.054338   34695 round_trippers.go:580]     Audit-Id: 49b6d736-100d-47f5-a014-fa14b5cf6188
	I0717 22:04:13.054344   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:13.054350   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:13.054359   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:13.054370   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:13.054382   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:13 GMT
	I0717 22:04:13.054478   34695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-m5spw","generateName":"kube-proxy-","namespace":"kube-system","uid":"a4bf0eb3-126a-463e-a670-b4793e1c5ec9","resourceVersion":"415","creationTimestamp":"2023-07-17T22:04:05Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bcb9f55e-4db6-4370-ad50-de72169bcc0c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcb9f55e-4db6-4370-ad50-de72169bcc0c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0717 22:04:13.054803   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:04:13.054814   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:13.054820   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:13.054826   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:13.056442   34695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 22:04:13.056454   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:13.056459   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:13.056465   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:13 GMT
	I0717 22:04:13.056470   34695 round_trippers.go:580]     Audit-Id: 8e8b7fe4-1da8-4c06-964f-5c32339cd6e2
	I0717 22:04:13.056475   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:13.056480   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:13.056486   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:13.056765   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"426","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0717 22:04:13.057017   34695 pod_ready.go:92] pod "kube-proxy-m5spw" in "kube-system" namespace has status "Ready":"True"
	I0717 22:04:13.057029   34695 pod_ready.go:81] duration metric: took 4.722251ms waiting for pod "kube-proxy-m5spw" in "kube-system" namespace to be "Ready" ...
	I0717 22:04:13.057036   34695 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:04:13.229456   34695 request.go:628] Waited for 172.369459ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-009530
	I0717 22:04:13.229546   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-009530
	I0717 22:04:13.229553   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:13.229564   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:13.229575   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:13.232405   34695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:04:13.232422   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:13.232428   34695 round_trippers.go:580]     Audit-Id: 2e3ac1ad-79b9-40d3-9970-2a0949c34358
	I0717 22:04:13.232434   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:13.232439   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:13.232444   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:13.232450   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:13.232459   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:13 GMT
	I0717 22:04:13.232612   34695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-009530","namespace":"kube-system","uid":"5da85194-923d-40f6-ab44-86209b1f057d","resourceVersion":"441","creationTimestamp":"2023-07-17T22:03:52Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"036d300e0ec7bf28a26e0c644008bbd5","kubernetes.io/config.mirror":"036d300e0ec7bf28a26e0c644008bbd5","kubernetes.io/config.seen":"2023-07-17T22:03:52.473677561Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:03:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0717 22:04:13.429458   34695 request.go:628] Waited for 196.407808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:04:13.429505   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:04:13.429510   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:13.429527   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:13.429533   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:13.432497   34695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:04:13.432516   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:13.432526   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:13 GMT
	I0717 22:04:13.432534   34695 round_trippers.go:580]     Audit-Id: 1baaae82-f22d-4949-a5ff-d0d37acd70df
	I0717 22:04:13.432545   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:13.432552   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:13.432560   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:13.432569   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:13.432719   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"426","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0717 22:04:13.433036   34695 pod_ready.go:92] pod "kube-scheduler-multinode-009530" in "kube-system" namespace has status "Ready":"True"
	I0717 22:04:13.433051   34695 pod_ready.go:81] duration metric: took 376.009264ms waiting for pod "kube-scheduler-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:04:13.433062   34695 pod_ready.go:38] duration metric: took 2.442595809s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
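
The pod_ready.go lines above poll each control-plane pod over the apiserver REST API and consider it healthy once its Ready condition is True. A minimal client-go sketch of that loop, assuming a kubeconfig at the default ~/.kube/config path; waitPodReady is a hypothetical helper, not minikube's actual implementation:

// Poll a kube-system pod until its Ready condition is True or a deadline passes.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil // pod reports Ready, matching pod_ready.go:92 above
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // re-poll, as the repeated GETs in the log do
	}
	return fmt.Errorf("pod %s not Ready within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitPodReady(cs, "etcd-multinode-009530", 6*time.Minute))
}
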
	I0717 22:04:13.433082   34695 api_server.go:52] waiting for apiserver process to appear ...
	I0717 22:04:13.433124   34695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:04:13.446683   34695 command_runner.go:130] > 1068
	I0717 22:04:13.446778   34695 api_server.go:72] duration metric: took 8.247551022s to wait for apiserver process to appear ...
	I0717 22:04:13.446798   34695 api_server.go:88] waiting for apiserver healthz status ...
	I0717 22:04:13.446816   34695 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8443/healthz ...
	I0717 22:04:13.453439   34695 api_server.go:279] https://192.168.39.222:8443/healthz returned 200:
	ok
	I0717 22:04:13.453501   34695 round_trippers.go:463] GET https://192.168.39.222:8443/version
	I0717 22:04:13.453512   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:13.453538   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:13.453556   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:13.454771   34695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 22:04:13.454787   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:13.454794   34695 round_trippers.go:580]     Audit-Id: 0202351e-a6c4-48cd-ac21-19bc5f8a280f
	I0717 22:04:13.454800   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:13.454805   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:13.454811   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:13.454817   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:13.454826   34695 round_trippers.go:580]     Content-Length: 263
	I0717 22:04:13.454831   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:13 GMT
	I0717 22:04:13.454938   34695 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.3",
	  "gitCommit": "25b4e43193bcda6c7328a6d147b1fb73a33f1598",
	  "gitTreeState": "clean",
	  "buildDate": "2023-06-14T09:47:40Z",
	  "goVersion": "go1.20.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0717 22:04:13.455028   34695 api_server.go:141] control plane version: v1.27.3
	I0717 22:04:13.455045   34695 api_server.go:131] duration metric: took 8.240996ms to wait for apiserver health ...
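
The /healthz and /version requests above are how minikube decides the apiserver is serving. A minimal sketch of the same probe, assuming anonymous access to /healthz (allowed in a default cluster) and skipping TLS verification purely to keep the example short; the real client uses the cluster CA instead:

// GET https://<node>:8443/healthz and treat a 200 response with body "ok" as healthy.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.222:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200 and "ok"
}
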
	I0717 22:04:13.455054   34695 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 22:04:13.629491   34695 request.go:628] Waited for 174.368903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods
	I0717 22:04:13.629624   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods
	I0717 22:04:13.629639   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:13.629647   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:13.629653   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:13.633323   34695 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:04:13.633342   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:13.633349   34695 round_trippers.go:580]     Audit-Id: 50415f98-234b-4a92-b6c4-b09ce98d9037
	I0717 22:04:13.633354   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:13.633359   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:13.633365   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:13.633370   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:13.633377   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:13 GMT
	I0717 22:04:13.634490   34695 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"451"},"items":[{"metadata":{"name":"coredns-5d78c9869d-z4fr8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"1fb1d992-a7b6-4259-ba61-dc4092c65c44","resourceVersion":"446","creationTimestamp":"2023-07-17T22:04:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a6163f77-c2a4-4a3f-a656-fb3401fc7602","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6163f77-c2a4-4a3f-a656-fb3401fc7602\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53997 chars]
	I0717 22:04:13.637668   34695 system_pods.go:59] 8 kube-system pods found
	I0717 22:04:13.637690   34695 system_pods.go:61] "coredns-5d78c9869d-z4fr8" [1fb1d992-a7b6-4259-ba61-dc4092c65c44] Running
	I0717 22:04:13.637696   34695 system_pods.go:61] "etcd-multinode-009530" [aed75ad9-0156-4275-8a41-b68d09c15660] Running
	I0717 22:04:13.637700   34695 system_pods.go:61] "kindnet-gh4hn" [d474f5c5-bd94-411b-8d69-b3871c2b5653] Running
	I0717 22:04:13.637703   34695 system_pods.go:61] "kube-apiserver-multinode-009530" [958b1550-f15f-49f3-acf3-dbab69f82fb8] Running
	I0717 22:04:13.637707   34695 system_pods.go:61] "kube-controller-manager-multinode-009530" [1c9dba7c-6497-41f0-b751-17988278c710] Running
	I0717 22:04:13.637711   34695 system_pods.go:61] "kube-proxy-m5spw" [a4bf0eb3-126a-463e-a670-b4793e1c5ec9] Running
	I0717 22:04:13.637715   34695 system_pods.go:61] "kube-scheduler-multinode-009530" [5da85194-923d-40f6-ab44-86209b1f057d] Running
	I0717 22:04:13.637719   34695 system_pods.go:61] "storage-provisioner" [d8f48e9c-2b37-4edc-89e4-d032cac0d573] Running
	I0717 22:04:13.637723   34695 system_pods.go:74] duration metric: took 182.663826ms to wait for pod list to return data ...
	I0717 22:04:13.637730   34695 default_sa.go:34] waiting for default service account to be created ...
	I0717 22:04:13.829045   34695 request.go:628] Waited for 191.257307ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/default/serviceaccounts
	I0717 22:04:13.829106   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/default/serviceaccounts
	I0717 22:04:13.829111   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:13.829118   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:13.829125   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:13.831890   34695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:04:13.831905   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:13.831912   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:13 GMT
	I0717 22:04:13.831918   34695 round_trippers.go:580]     Audit-Id: 75946605-a849-450d-a5b3-add8acde8c65
	I0717 22:04:13.831923   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:13.831928   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:13.831940   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:13.831959   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:13.831968   34695 round_trippers.go:580]     Content-Length: 261
	I0717 22:04:13.831994   34695 request.go:1188] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"451"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"558ff881-614f-4fb6-9e77-8488151c76a7","resourceVersion":"345","creationTimestamp":"2023-07-17T22:04:04Z"}}]}
	I0717 22:04:13.832190   34695 default_sa.go:45] found service account: "default"
	I0717 22:04:13.832206   34695 default_sa.go:55] duration metric: took 194.471465ms for default service account to be created ...
	I0717 22:04:13.832213   34695 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 22:04:14.029692   34695 request.go:628] Waited for 197.419347ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods
	I0717 22:04:14.029758   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods
	I0717 22:04:14.029762   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:14.029772   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:14.029779   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:14.033643   34695 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:04:14.033666   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:14.033674   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:14.033679   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:14.033684   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:14.033690   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:14.033695   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:14 GMT
	I0717 22:04:14.033700   34695 round_trippers.go:580]     Audit-Id: 81fef6f7-3f20-407b-bef3-ffa57e593805
	I0717 22:04:14.034198   34695 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"452"},"items":[{"metadata":{"name":"coredns-5d78c9869d-z4fr8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"1fb1d992-a7b6-4259-ba61-dc4092c65c44","resourceVersion":"446","creationTimestamp":"2023-07-17T22:04:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a6163f77-c2a4-4a3f-a656-fb3401fc7602","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6163f77-c2a4-4a3f-a656-fb3401fc7602\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53997 chars]
	I0717 22:04:14.035864   34695 system_pods.go:86] 8 kube-system pods found
	I0717 22:04:14.035889   34695 system_pods.go:89] "coredns-5d78c9869d-z4fr8" [1fb1d992-a7b6-4259-ba61-dc4092c65c44] Running
	I0717 22:04:14.035894   34695 system_pods.go:89] "etcd-multinode-009530" [aed75ad9-0156-4275-8a41-b68d09c15660] Running
	I0717 22:04:14.035898   34695 system_pods.go:89] "kindnet-gh4hn" [d474f5c5-bd94-411b-8d69-b3871c2b5653] Running
	I0717 22:04:14.035904   34695 system_pods.go:89] "kube-apiserver-multinode-009530" [958b1550-f15f-49f3-acf3-dbab69f82fb8] Running
	I0717 22:04:14.035913   34695 system_pods.go:89] "kube-controller-manager-multinode-009530" [1c9dba7c-6497-41f0-b751-17988278c710] Running
	I0717 22:04:14.035928   34695 system_pods.go:89] "kube-proxy-m5spw" [a4bf0eb3-126a-463e-a670-b4793e1c5ec9] Running
	I0717 22:04:14.035936   34695 system_pods.go:89] "kube-scheduler-multinode-009530" [5da85194-923d-40f6-ab44-86209b1f057d] Running
	I0717 22:04:14.035943   34695 system_pods.go:89] "storage-provisioner" [d8f48e9c-2b37-4edc-89e4-d032cac0d573] Running
	I0717 22:04:14.035956   34695 system_pods.go:126] duration metric: took 203.737251ms to wait for k8s-apps to be running ...
	I0717 22:04:14.035966   34695 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 22:04:14.036013   34695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:04:14.051705   34695 system_svc.go:56] duration metric: took 15.728401ms WaitForService to wait for kubelet.
	I0717 22:04:14.051733   34695 kubeadm.go:581] duration metric: took 8.852509688s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
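
The system_svc.go step above shells out (through the ssh_runner) to systemd to confirm the kubelet unit is active. A local sketch of that check, with the sudo/SSH wrapping omitted:

// Run `systemctl is-active --quiet kubelet`; exit status 0 means the unit is active.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet service is running")
}
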
	I0717 22:04:14.051751   34695 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:04:14.229135   34695 request.go:628] Waited for 177.305475ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes
	I0717 22:04:14.229194   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes
	I0717 22:04:14.229199   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:14.229206   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:14.229213   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:14.232106   34695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:04:14.232126   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:14.232132   34695 round_trippers.go:580]     Audit-Id: 0f4e0548-4969-4cbc-b67e-2b6db222c826
	I0717 22:04:14.232138   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:14.232143   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:14.232149   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:14.232154   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:14.232161   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:14 GMT
	I0717 22:04:14.232495   34695 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"452"},"items":[{"metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"426","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 5952 chars]
	I0717 22:04:14.232912   34695 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:04:14.232936   34695 node_conditions.go:123] node cpu capacity is 2
	I0717 22:04:14.232948   34695 node_conditions.go:105] duration metric: took 181.193157ms to run NodePressure ...
	I0717 22:04:14.232958   34695 start.go:228] waiting for startup goroutines ...
	I0717 22:04:14.232965   34695 start.go:233] waiting for cluster config update ...
	I0717 22:04:14.232973   34695 start.go:242] writing updated cluster config ...
	I0717 22:04:14.235503   34695 out.go:177] 
	I0717 22:04:14.237368   34695 config.go:182] Loaded profile config "multinode-009530": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:04:14.237448   34695 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/config.json ...
	I0717 22:04:14.239329   34695 out.go:177] * Starting worker node multinode-009530-m02 in cluster multinode-009530
	I0717 22:04:14.240686   34695 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 22:04:14.240717   34695 cache.go:57] Caching tarball of preloaded images
	I0717 22:04:14.240815   34695 preload.go:174] Found /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 22:04:14.240826   34695 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 22:04:14.240907   34695 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/config.json ...
	I0717 22:04:14.241065   34695 start.go:365] acquiring machines lock for multinode-009530-m02: {Name:mk8a02d78ba6485e1a7d39c2e7feff944c6a9dbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 22:04:14.241105   34695 start.go:369] acquired machines lock for "multinode-009530-m02" in 22.296µs
	I0717 22:04:14.241127   34695 start.go:93] Provisioning new machine with config: &{Name:multinode-009530 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-0
09530 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.222 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name:m02 IP: Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0717 22:04:14.241196   34695 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0717 22:04:14.243101   34695 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 22:04:14.243225   34695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:04:14.243270   34695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:04:14.257528   34695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32881
	I0717 22:04:14.258002   34695 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:04:14.258451   34695 main.go:141] libmachine: Using API Version  1
	I0717 22:04:14.258472   34695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:04:14.258760   34695 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:04:14.258958   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetMachineName
	I0717 22:04:14.259140   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .DriverName
	I0717 22:04:14.259283   34695 start.go:159] libmachine.API.Create for "multinode-009530" (driver="kvm2")
	I0717 22:04:14.259312   34695 client.go:168] LocalClient.Create starting
	I0717 22:04:14.259352   34695 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem
	I0717 22:04:14.259394   34695 main.go:141] libmachine: Decoding PEM data...
	I0717 22:04:14.259418   34695 main.go:141] libmachine: Parsing certificate...
	I0717 22:04:14.259487   34695 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem
	I0717 22:04:14.259516   34695 main.go:141] libmachine: Decoding PEM data...
	I0717 22:04:14.259542   34695 main.go:141] libmachine: Parsing certificate...
	I0717 22:04:14.259571   34695 main.go:141] libmachine: Running pre-create checks...
	I0717 22:04:14.259585   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .PreCreateCheck
	I0717 22:04:14.259740   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetConfigRaw
	I0717 22:04:14.260110   34695 main.go:141] libmachine: Creating machine...
	I0717 22:04:14.260126   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .Create
	I0717 22:04:14.260330   34695 main.go:141] libmachine: (multinode-009530-m02) Creating KVM machine...
	I0717 22:04:14.261605   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | found existing default KVM network
	I0717 22:04:14.261754   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | found existing private KVM network mk-multinode-009530
	I0717 22:04:14.261882   34695 main.go:141] libmachine: (multinode-009530-m02) Setting up store path in /home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530-m02 ...
	I0717 22:04:14.261909   34695 main.go:141] libmachine: (multinode-009530-m02) Building disk image from file:///home/jenkins/minikube-integration/16899-15759/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso
	I0717 22:04:14.261961   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | I0717 22:04:14.261865   35047 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/16899-15759/.minikube
	I0717 22:04:14.262085   34695 main.go:141] libmachine: (multinode-009530-m02) Downloading /home/jenkins/minikube-integration/16899-15759/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16899-15759/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso...
	I0717 22:04:14.463233   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | I0717 22:04:14.463125   35047 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530-m02/id_rsa...
	I0717 22:04:14.567832   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | I0717 22:04:14.567690   35047 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530-m02/multinode-009530-m02.rawdisk...
	I0717 22:04:14.567867   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | Writing magic tar header
	I0717 22:04:14.567882   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | Writing SSH key tar header
	I0717 22:04:14.567896   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | I0717 22:04:14.567826   35047 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530-m02 ...
	I0717 22:04:14.567915   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530-m02
	I0717 22:04:14.567982   34695 main.go:141] libmachine: (multinode-009530-m02) Setting executable bit set on /home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530-m02 (perms=drwx------)
	I0717 22:04:14.568007   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16899-15759/.minikube/machines
	I0717 22:04:14.568016   34695 main.go:141] libmachine: (multinode-009530-m02) Setting executable bit set on /home/jenkins/minikube-integration/16899-15759/.minikube/machines (perms=drwxr-xr-x)
	I0717 22:04:14.568032   34695 main.go:141] libmachine: (multinode-009530-m02) Setting executable bit set on /home/jenkins/minikube-integration/16899-15759/.minikube (perms=drwxr-xr-x)
	I0717 22:04:14.568047   34695 main.go:141] libmachine: (multinode-009530-m02) Setting executable bit set on /home/jenkins/minikube-integration/16899-15759 (perms=drwxrwxr-x)
	I0717 22:04:14.568065   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16899-15759/.minikube
	I0717 22:04:14.568082   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16899-15759
	I0717 22:04:14.568097   34695 main.go:141] libmachine: (multinode-009530-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 22:04:14.568109   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 22:04:14.568122   34695 main.go:141] libmachine: (multinode-009530-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 22:04:14.568135   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | Checking permissions on dir: /home/jenkins
	I0717 22:04:14.568152   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | Checking permissions on dir: /home
	I0717 22:04:14.568165   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | Skipping /home - not owner
	I0717 22:04:14.568183   34695 main.go:141] libmachine: (multinode-009530-m02) Creating domain...
	I0717 22:04:14.569051   34695 main.go:141] libmachine: (multinode-009530-m02) define libvirt domain using xml: 
	I0717 22:04:14.569073   34695 main.go:141] libmachine: (multinode-009530-m02) <domain type='kvm'>
	I0717 22:04:14.569081   34695 main.go:141] libmachine: (multinode-009530-m02)   <name>multinode-009530-m02</name>
	I0717 22:04:14.569088   34695 main.go:141] libmachine: (multinode-009530-m02)   <memory unit='MiB'>2200</memory>
	I0717 22:04:14.569097   34695 main.go:141] libmachine: (multinode-009530-m02)   <vcpu>2</vcpu>
	I0717 22:04:14.569104   34695 main.go:141] libmachine: (multinode-009530-m02)   <features>
	I0717 22:04:14.569110   34695 main.go:141] libmachine: (multinode-009530-m02)     <acpi/>
	I0717 22:04:14.569123   34695 main.go:141] libmachine: (multinode-009530-m02)     <apic/>
	I0717 22:04:14.569132   34695 main.go:141] libmachine: (multinode-009530-m02)     <pae/>
	I0717 22:04:14.569140   34695 main.go:141] libmachine: (multinode-009530-m02)     
	I0717 22:04:14.569152   34695 main.go:141] libmachine: (multinode-009530-m02)   </features>
	I0717 22:04:14.569160   34695 main.go:141] libmachine: (multinode-009530-m02)   <cpu mode='host-passthrough'>
	I0717 22:04:14.569181   34695 main.go:141] libmachine: (multinode-009530-m02)   
	I0717 22:04:14.569206   34695 main.go:141] libmachine: (multinode-009530-m02)   </cpu>
	I0717 22:04:14.569221   34695 main.go:141] libmachine: (multinode-009530-m02)   <os>
	I0717 22:04:14.569234   34695 main.go:141] libmachine: (multinode-009530-m02)     <type>hvm</type>
	I0717 22:04:14.569248   34695 main.go:141] libmachine: (multinode-009530-m02)     <boot dev='cdrom'/>
	I0717 22:04:14.569258   34695 main.go:141] libmachine: (multinode-009530-m02)     <boot dev='hd'/>
	I0717 22:04:14.569268   34695 main.go:141] libmachine: (multinode-009530-m02)     <bootmenu enable='no'/>
	I0717 22:04:14.569298   34695 main.go:141] libmachine: (multinode-009530-m02)   </os>
	I0717 22:04:14.569323   34695 main.go:141] libmachine: (multinode-009530-m02)   <devices>
	I0717 22:04:14.569341   34695 main.go:141] libmachine: (multinode-009530-m02)     <disk type='file' device='cdrom'>
	I0717 22:04:14.569361   34695 main.go:141] libmachine: (multinode-009530-m02)       <source file='/home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530-m02/boot2docker.iso'/>
	I0717 22:04:14.569377   34695 main.go:141] libmachine: (multinode-009530-m02)       <target dev='hdc' bus='scsi'/>
	I0717 22:04:14.569390   34695 main.go:141] libmachine: (multinode-009530-m02)       <readonly/>
	I0717 22:04:14.569403   34695 main.go:141] libmachine: (multinode-009530-m02)     </disk>
	I0717 22:04:14.569420   34695 main.go:141] libmachine: (multinode-009530-m02)     <disk type='file' device='disk'>
	I0717 22:04:14.569436   34695 main.go:141] libmachine: (multinode-009530-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 22:04:14.569456   34695 main.go:141] libmachine: (multinode-009530-m02)       <source file='/home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530-m02/multinode-009530-m02.rawdisk'/>
	I0717 22:04:14.569470   34695 main.go:141] libmachine: (multinode-009530-m02)       <target dev='hda' bus='virtio'/>
	I0717 22:04:14.569485   34695 main.go:141] libmachine: (multinode-009530-m02)     </disk>
	I0717 22:04:14.569501   34695 main.go:141] libmachine: (multinode-009530-m02)     <interface type='network'>
	I0717 22:04:14.569531   34695 main.go:141] libmachine: (multinode-009530-m02)       <source network='mk-multinode-009530'/>
	I0717 22:04:14.569546   34695 main.go:141] libmachine: (multinode-009530-m02)       <model type='virtio'/>
	I0717 22:04:14.569557   34695 main.go:141] libmachine: (multinode-009530-m02)     </interface>
	I0717 22:04:14.569568   34695 main.go:141] libmachine: (multinode-009530-m02)     <interface type='network'>
	I0717 22:04:14.569578   34695 main.go:141] libmachine: (multinode-009530-m02)       <source network='default'/>
	I0717 22:04:14.569592   34695 main.go:141] libmachine: (multinode-009530-m02)       <model type='virtio'/>
	I0717 22:04:14.569599   34695 main.go:141] libmachine: (multinode-009530-m02)     </interface>
	I0717 22:04:14.569605   34695 main.go:141] libmachine: (multinode-009530-m02)     <serial type='pty'>
	I0717 22:04:14.569614   34695 main.go:141] libmachine: (multinode-009530-m02)       <target port='0'/>
	I0717 22:04:14.569622   34695 main.go:141] libmachine: (multinode-009530-m02)     </serial>
	I0717 22:04:14.569628   34695 main.go:141] libmachine: (multinode-009530-m02)     <console type='pty'>
	I0717 22:04:14.569633   34695 main.go:141] libmachine: (multinode-009530-m02)       <target type='serial' port='0'/>
	I0717 22:04:14.569639   34695 main.go:141] libmachine: (multinode-009530-m02)     </console>
	I0717 22:04:14.569644   34695 main.go:141] libmachine: (multinode-009530-m02)     <rng model='virtio'>
	I0717 22:04:14.569653   34695 main.go:141] libmachine: (multinode-009530-m02)       <backend model='random'>/dev/random</backend>
	I0717 22:04:14.569661   34695 main.go:141] libmachine: (multinode-009530-m02)     </rng>
	I0717 22:04:14.569667   34695 main.go:141] libmachine: (multinode-009530-m02)     
	I0717 22:04:14.569679   34695 main.go:141] libmachine: (multinode-009530-m02)     
	I0717 22:04:14.569687   34695 main.go:141] libmachine: (multinode-009530-m02)   </devices>
	I0717 22:04:14.569695   34695 main.go:141] libmachine: (multinode-009530-m02) </domain>
	I0717 22:04:14.569702   34695 main.go:141] libmachine: (multinode-009530-m02) 
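
The block above is the libvirt domain XML the kvm2 driver defines for the new m02 machine. A rough by-hand equivalent using virsh (the driver itself submits the XML through the libvirt API rather than shelling out), assuming the XML has been saved to /tmp/multinode-009530-m02.xml:

// Define the domain from its XML, then start it ("Creating domain..." in the log).
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	if out, err := exec.Command("virsh", "define", "/tmp/multinode-009530-m02.xml").CombinedOutput(); err != nil {
		panic(fmt.Sprintf("define failed: %v: %s", err, out))
	}
	if out, err := exec.Command("virsh", "start", "multinode-009530-m02").CombinedOutput(); err != nil {
		panic(fmt.Sprintf("start failed: %v: %s", err, out))
	}
	fmt.Println("domain defined and started")
}
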
	I0717 22:04:14.576530   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:ee:ce:7c in network default
	I0717 22:04:14.577099   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:14.577113   34695 main.go:141] libmachine: (multinode-009530-m02) Ensuring networks are active...
	I0717 22:04:14.577905   34695 main.go:141] libmachine: (multinode-009530-m02) Ensuring network default is active
	I0717 22:04:14.578270   34695 main.go:141] libmachine: (multinode-009530-m02) Ensuring network mk-multinode-009530 is active
	I0717 22:04:14.578606   34695 main.go:141] libmachine: (multinode-009530-m02) Getting domain xml...
	I0717 22:04:14.579318   34695 main.go:141] libmachine: (multinode-009530-m02) Creating domain...
	I0717 22:04:14.962914   34695 main.go:141] libmachine: (multinode-009530-m02) Waiting to get IP...
	I0717 22:04:14.963658   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:14.964027   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | unable to find current IP address of domain multinode-009530-m02 in network mk-multinode-009530
	I0717 22:04:14.964103   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | I0717 22:04:14.964035   35047 retry.go:31] will retry after 282.263294ms: waiting for machine to come up
	I0717 22:04:15.247503   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:15.247904   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | unable to find current IP address of domain multinode-009530-m02 in network mk-multinode-009530
	I0717 22:04:15.247938   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | I0717 22:04:15.247859   35047 retry.go:31] will retry after 316.914476ms: waiting for machine to come up
	I0717 22:04:15.566476   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:15.566940   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | unable to find current IP address of domain multinode-009530-m02 in network mk-multinode-009530
	I0717 22:04:15.566966   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | I0717 22:04:15.566897   35047 retry.go:31] will retry after 311.925465ms: waiting for machine to come up
	I0717 22:04:15.880378   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:15.880811   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | unable to find current IP address of domain multinode-009530-m02 in network mk-multinode-009530
	I0717 22:04:15.880838   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | I0717 22:04:15.880763   35047 retry.go:31] will retry after 462.657544ms: waiting for machine to come up
	I0717 22:04:16.345371   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:16.345786   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | unable to find current IP address of domain multinode-009530-m02 in network mk-multinode-009530
	I0717 22:04:16.345817   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | I0717 22:04:16.345736   35047 retry.go:31] will retry after 646.921184ms: waiting for machine to come up
	I0717 22:04:16.994657   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:16.995090   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | unable to find current IP address of domain multinode-009530-m02 in network mk-multinode-009530
	I0717 22:04:16.995130   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | I0717 22:04:16.995051   35047 retry.go:31] will retry after 637.305224ms: waiting for machine to come up
	I0717 22:04:17.633858   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:17.634305   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | unable to find current IP address of domain multinode-009530-m02 in network mk-multinode-009530
	I0717 22:04:17.634326   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | I0717 22:04:17.634273   35047 retry.go:31] will retry after 870.210938ms: waiting for machine to come up
	I0717 22:04:18.505681   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:18.506057   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | unable to find current IP address of domain multinode-009530-m02 in network mk-multinode-009530
	I0717 22:04:18.506079   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | I0717 22:04:18.505994   35047 retry.go:31] will retry after 1.16511698s: waiting for machine to come up
	I0717 22:04:19.672699   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:19.673243   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | unable to find current IP address of domain multinode-009530-m02 in network mk-multinode-009530
	I0717 22:04:19.673265   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | I0717 22:04:19.673186   35047 retry.go:31] will retry after 1.360002174s: waiting for machine to come up
	I0717 22:04:21.034543   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:21.034994   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | unable to find current IP address of domain multinode-009530-m02 in network mk-multinode-009530
	I0717 22:04:21.035020   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | I0717 22:04:21.034961   35047 retry.go:31] will retry after 2.288400571s: waiting for machine to come up
	I0717 22:04:23.325709   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:23.326202   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | unable to find current IP address of domain multinode-009530-m02 in network mk-multinode-009530
	I0717 22:04:23.326228   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | I0717 22:04:23.326155   35047 retry.go:31] will retry after 2.634288613s: waiting for machine to come up
	I0717 22:04:25.962444   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:25.962933   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | unable to find current IP address of domain multinode-009530-m02 in network mk-multinode-009530
	I0717 22:04:25.962966   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | I0717 22:04:25.962874   35047 retry.go:31] will retry after 2.567006888s: waiting for machine to come up
	I0717 22:04:28.532575   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:28.533001   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | unable to find current IP address of domain multinode-009530-m02 in network mk-multinode-009530
	I0717 22:04:28.533031   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | I0717 22:04:28.532945   35047 retry.go:31] will retry after 2.895084636s: waiting for machine to come up
	I0717 22:04:31.431409   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:31.431897   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | unable to find current IP address of domain multinode-009530-m02 in network mk-multinode-009530
	I0717 22:04:31.431918   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | I0717 22:04:31.431840   35047 retry.go:31] will retry after 3.821436765s: waiting for machine to come up
	I0717 22:04:35.254528   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:35.255056   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has current primary IP address 192.168.39.146 and MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:35.255084   34695 main.go:141] libmachine: (multinode-009530-m02) Found IP for machine: 192.168.39.146
	I0717 22:04:35.255094   34695 main.go:141] libmachine: (multinode-009530-m02) Reserving static IP address...
	I0717 22:04:35.255498   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | unable to find host DHCP lease matching {name: "multinode-009530-m02", mac: "52:54:00:2a:ac:62", ip: "192.168.39.146"} in network mk-multinode-009530
	I0717 22:04:35.330690   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | Getting to WaitForSSH function...
	I0717 22:04:35.330718   34695 main.go:141] libmachine: (multinode-009530-m02) Reserved static IP address: 192.168.39.146
	I0717 22:04:35.330730   34695 main.go:141] libmachine: (multinode-009530-m02) Waiting for SSH to be available...
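
The "will retry after ..." lines above show the driver polling libvirt for a DHCP lease with a growing, jittered delay until the new machine reports an IP. A minimal sketch of that pattern in Go (not minikube's actual retry.go; lookupIP and the delay growth factor are illustrative assumptions):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP stands in for querying the libvirt DHCP leases for the domain.
    func lookupIP() (string, error) { return "", errors.New("no lease yet") }

    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            // Sleep with jitter, then grow the delay, roughly matching the
            // 282ms, 316ms, ... 3.8s intervals logged above.
            time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
            if delay < 4*time.Second {
                delay = delay * 3 / 2
            }
        }
        return "", fmt.Errorf("timed out after %s waiting for an IP", timeout)
    }

    func main() { fmt.Println(waitForIP(2 * time.Second)) }

Once a lease appears, the driver pins the address (the "Reserving static IP" step above) so later SSH and kubelet configuration can rely on it.
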
	I0717 22:04:35.334350   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:35.334813   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:62", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:04:29 +0000 UTC Type:0 Mac:52:54:00:2a:ac:62 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2a:ac:62}
	I0717 22:04:35.334849   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:35.335011   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | Using SSH client type: external
	I0717 22:04:35.335035   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530-m02/id_rsa (-rw-------)
	I0717 22:04:35.335066   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.146 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 22:04:35.335083   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | About to run SSH command:
	I0717 22:04:35.335099   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | exit 0
	I0717 22:04:35.425414   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | SSH cmd err, output: <nil>: 
	I0717 22:04:35.425727   34695 main.go:141] libmachine: (multinode-009530-m02) KVM machine creation complete!
	I0717 22:04:35.425967   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetConfigRaw
	I0717 22:04:35.426498   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .DriverName
	I0717 22:04:35.426690   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .DriverName
	I0717 22:04:35.426846   34695 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 22:04:35.426865   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetState
	I0717 22:04:35.428093   34695 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 22:04:35.428107   34695 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 22:04:35.428112   34695 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 22:04:35.428119   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHHostname
	I0717 22:04:35.430404   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:35.430805   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:62", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:04:29 +0000 UTC Type:0 Mac:52:54:00:2a:ac:62 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-009530-m02 Clientid:01:52:54:00:2a:ac:62}
	I0717 22:04:35.430829   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:35.430977   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHPort
	I0717 22:04:35.431148   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHKeyPath
	I0717 22:04:35.431340   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHKeyPath
	I0717 22:04:35.431505   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHUsername
	I0717 22:04:35.431688   34695 main.go:141] libmachine: Using SSH client type: native
	I0717 22:04:35.432298   34695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0717 22:04:35.432316   34695 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 22:04:35.553101   34695 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:04:35.553126   34695 main.go:141] libmachine: Detecting the provisioner...
	I0717 22:04:35.553134   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHHostname
	I0717 22:04:35.555692   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:35.556029   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:62", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:04:29 +0000 UTC Type:0 Mac:52:54:00:2a:ac:62 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-009530-m02 Clientid:01:52:54:00:2a:ac:62}
	I0717 22:04:35.556059   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:35.556250   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHPort
	I0717 22:04:35.556443   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHKeyPath
	I0717 22:04:35.556636   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHKeyPath
	I0717 22:04:35.556780   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHUsername
	I0717 22:04:35.556962   34695 main.go:141] libmachine: Using SSH client type: native
	I0717 22:04:35.557332   34695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0717 22:04:35.557346   34695 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 22:04:35.678502   34695 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gf5d52c7-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0717 22:04:35.678587   34695 main.go:141] libmachine: found compatible host: buildroot
	I0717 22:04:35.678603   34695 main.go:141] libmachine: Provisioning with buildroot...
	I0717 22:04:35.678629   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetMachineName
	I0717 22:04:35.678930   34695 buildroot.go:166] provisioning hostname "multinode-009530-m02"
	I0717 22:04:35.678957   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetMachineName
	I0717 22:04:35.679119   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHHostname
	I0717 22:04:35.681950   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:35.682384   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:62", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:04:29 +0000 UTC Type:0 Mac:52:54:00:2a:ac:62 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-009530-m02 Clientid:01:52:54:00:2a:ac:62}
	I0717 22:04:35.682417   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:35.682558   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHPort
	I0717 22:04:35.682766   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHKeyPath
	I0717 22:04:35.682931   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHKeyPath
	I0717 22:04:35.683055   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHUsername
	I0717 22:04:35.683238   34695 main.go:141] libmachine: Using SSH client type: native
	I0717 22:04:35.683624   34695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0717 22:04:35.683647   34695 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-009530-m02 && echo "multinode-009530-m02" | sudo tee /etc/hostname
	I0717 22:04:35.820542   34695 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-009530-m02
	
	I0717 22:04:35.820570   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHHostname
	I0717 22:04:35.823535   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:35.823962   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:62", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:04:29 +0000 UTC Type:0 Mac:52:54:00:2a:ac:62 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-009530-m02 Clientid:01:52:54:00:2a:ac:62}
	I0717 22:04:35.823995   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:35.824153   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHPort
	I0717 22:04:35.824342   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHKeyPath
	I0717 22:04:35.824549   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHKeyPath
	I0717 22:04:35.824679   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHUsername
	I0717 22:04:35.824900   34695 main.go:141] libmachine: Using SSH client type: native
	I0717 22:04:35.825300   34695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0717 22:04:35.825326   34695 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-009530-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-009530-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-009530-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 22:04:35.954502   34695 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:04:35.954535   34695 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16899-15759/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-15759/.minikube}
	I0717 22:04:35.954554   34695 buildroot.go:174] setting up certificates
	I0717 22:04:35.954561   34695 provision.go:83] configureAuth start
	I0717 22:04:35.954571   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetMachineName
	I0717 22:04:35.954878   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetIP
	I0717 22:04:35.957826   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:35.958123   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:62", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:04:29 +0000 UTC Type:0 Mac:52:54:00:2a:ac:62 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-009530-m02 Clientid:01:52:54:00:2a:ac:62}
	I0717 22:04:35.958178   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:35.958334   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHHostname
	I0717 22:04:35.960546   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:35.960896   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:62", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:04:29 +0000 UTC Type:0 Mac:52:54:00:2a:ac:62 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-009530-m02 Clientid:01:52:54:00:2a:ac:62}
	I0717 22:04:35.960927   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:35.961090   34695 provision.go:138] copyHostCerts
	I0717 22:04:35.961119   34695 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem
	I0717 22:04:35.961145   34695 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem, removing ...
	I0717 22:04:35.961154   34695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem
	I0717 22:04:35.961217   34695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem (1078 bytes)
	I0717 22:04:35.961296   34695 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem
	I0717 22:04:35.961317   34695 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem, removing ...
	I0717 22:04:35.961322   34695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem
	I0717 22:04:35.961347   34695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem (1123 bytes)
	I0717 22:04:35.961401   34695 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem
	I0717 22:04:35.961418   34695 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem, removing ...
	I0717 22:04:35.961425   34695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem
	I0717 22:04:35.961452   34695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem (1675 bytes)
	I0717 22:04:35.961511   34695 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem org=jenkins.multinode-009530-m02 san=[192.168.39.146 192.168.39.146 localhost 127.0.0.1 minikube multinode-009530-m02]
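
The provision step above generates a server certificate whose SANs cover the node IP, localhost, and the machine hostname. A minimal, self-signed sketch of building such a certificate with Go's crypto/x509 (the real flow signs with the minikube CA and writes server.pem/server-key.pem; key size, validity, and self-signing here are assumptions for brevity):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Errors are ignored only to keep the sketch short.
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-009530-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs taken from the san=[...] list logged above.
            DNSNames:    []string{"localhost", "minikube", "multinode-009530-m02"},
            IPAddresses: []net.IP{net.ParseIP("192.168.39.146"), net.ParseIP("127.0.0.1")},
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }

The resulting server.pem and server-key.pem are then copied to /etc/docker on the guest in the copyRemoteCerts step that follows.
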
	I0717 22:04:36.136931   34695 provision.go:172] copyRemoteCerts
	I0717 22:04:36.136986   34695 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 22:04:36.137009   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHHostname
	I0717 22:04:36.140033   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:36.140371   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:62", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:04:29 +0000 UTC Type:0 Mac:52:54:00:2a:ac:62 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-009530-m02 Clientid:01:52:54:00:2a:ac:62}
	I0717 22:04:36.140407   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:36.140633   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHPort
	I0717 22:04:36.140915   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHKeyPath
	I0717 22:04:36.141091   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHUsername
	I0717 22:04:36.141269   34695 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530-m02/id_rsa Username:docker}
	I0717 22:04:36.231023   34695 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 22:04:36.231101   34695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 22:04:36.258650   34695 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 22:04:36.258710   34695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 22:04:36.283276   34695 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 22:04:36.283355   34695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0717 22:04:36.310891   34695 provision.go:86] duration metric: configureAuth took 356.316695ms
	I0717 22:04:36.310916   34695 buildroot.go:189] setting minikube options for container-runtime
	I0717 22:04:36.311127   34695 config.go:182] Loaded profile config "multinode-009530": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:04:36.311213   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHHostname
	I0717 22:04:36.313872   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:36.314279   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:62", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:04:29 +0000 UTC Type:0 Mac:52:54:00:2a:ac:62 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-009530-m02 Clientid:01:52:54:00:2a:ac:62}
	I0717 22:04:36.314316   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:36.314634   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHPort
	I0717 22:04:36.314816   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHKeyPath
	I0717 22:04:36.314956   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHKeyPath
	I0717 22:04:36.315105   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHUsername
	I0717 22:04:36.315302   34695 main.go:141] libmachine: Using SSH client type: native
	I0717 22:04:36.315708   34695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0717 22:04:36.315732   34695 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 22:04:36.632301   34695 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 22:04:36.632326   34695 main.go:141] libmachine: Checking connection to Docker...
	I0717 22:04:36.632335   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetURL
	I0717 22:04:36.633848   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | Using libvirt version 6000000
	I0717 22:04:36.636432   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:36.636753   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:62", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:04:29 +0000 UTC Type:0 Mac:52:54:00:2a:ac:62 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-009530-m02 Clientid:01:52:54:00:2a:ac:62}
	I0717 22:04:36.636787   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:36.637053   34695 main.go:141] libmachine: Docker is up and running!
	I0717 22:04:36.637072   34695 main.go:141] libmachine: Reticulating splines...
	I0717 22:04:36.637079   34695 client.go:171] LocalClient.Create took 22.377758072s
	I0717 22:04:36.637115   34695 start.go:167] duration metric: libmachine.API.Create for "multinode-009530" took 22.377824229s
	I0717 22:04:36.637127   34695 start.go:300] post-start starting for "multinode-009530-m02" (driver="kvm2")
	I0717 22:04:36.637137   34695 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 22:04:36.637160   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .DriverName
	I0717 22:04:36.637386   34695 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 22:04:36.637409   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHHostname
	I0717 22:04:36.639703   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:36.640027   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:62", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:04:29 +0000 UTC Type:0 Mac:52:54:00:2a:ac:62 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-009530-m02 Clientid:01:52:54:00:2a:ac:62}
	I0717 22:04:36.640055   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:36.640216   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHPort
	I0717 22:04:36.640389   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHKeyPath
	I0717 22:04:36.640568   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHUsername
	I0717 22:04:36.640676   34695 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530-m02/id_rsa Username:docker}
	I0717 22:04:36.731212   34695 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 22:04:36.735546   34695 command_runner.go:130] > NAME=Buildroot
	I0717 22:04:36.735563   34695 command_runner.go:130] > VERSION=2021.02.12-1-gf5d52c7-dirty
	I0717 22:04:36.735568   34695 command_runner.go:130] > ID=buildroot
	I0717 22:04:36.735573   34695 command_runner.go:130] > VERSION_ID=2021.02.12
	I0717 22:04:36.735577   34695 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0717 22:04:36.735675   34695 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 22:04:36.735691   34695 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/addons for local assets ...
	I0717 22:04:36.735751   34695 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/files for local assets ...
	I0717 22:04:36.735815   34695 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> 229902.pem in /etc/ssl/certs
	I0717 22:04:36.735824   34695 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> /etc/ssl/certs/229902.pem
	I0717 22:04:36.735917   34695 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 22:04:36.744740   34695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:04:36.768621   34695 start.go:303] post-start completed in 131.481082ms
	I0717 22:04:36.768660   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetConfigRaw
	I0717 22:04:36.769208   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetIP
	I0717 22:04:36.772082   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:36.772482   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:62", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:04:29 +0000 UTC Type:0 Mac:52:54:00:2a:ac:62 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-009530-m02 Clientid:01:52:54:00:2a:ac:62}
	I0717 22:04:36.772516   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:36.772779   34695 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/config.json ...
	I0717 22:04:36.773000   34695 start.go:128] duration metric: createHost completed in 22.531793983s
	I0717 22:04:36.773024   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHHostname
	I0717 22:04:36.775583   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:36.775991   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:62", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:04:29 +0000 UTC Type:0 Mac:52:54:00:2a:ac:62 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-009530-m02 Clientid:01:52:54:00:2a:ac:62}
	I0717 22:04:36.776023   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:36.776204   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHPort
	I0717 22:04:36.776422   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHKeyPath
	I0717 22:04:36.776600   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHKeyPath
	I0717 22:04:36.776759   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHUsername
	I0717 22:04:36.776927   34695 main.go:141] libmachine: Using SSH client type: native
	I0717 22:04:36.777360   34695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0717 22:04:36.777373   34695 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 22:04:36.898542   34695 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689631476.876983814
	
	I0717 22:04:36.898560   34695 fix.go:206] guest clock: 1689631476.876983814
	I0717 22:04:36.898567   34695 fix.go:219] Guest: 2023-07-17 22:04:36.876983814 +0000 UTC Remote: 2023-07-17 22:04:36.773011728 +0000 UTC m=+88.934077084 (delta=103.972086ms)
	I0717 22:04:36.898580   34695 fix.go:190] guest clock delta is within tolerance: 103.972086ms
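
The fix.go lines above compare the guest's clock against the host's and only resynchronize when the difference exceeds a tolerance; here the ~104ms delta is accepted. A minimal sketch of that check (the 2s tolerance is an assumed value, not necessarily minikube's):

    package main

    import (
        "fmt"
        "time"
    )

    // clockDeltaOK returns the absolute guest/host clock difference and
    // whether it falls within the allowed tolerance.
    func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }

    func main() {
        host := time.Now()
        guest := host.Add(104 * time.Millisecond) // roughly the delta seen in the log
        d, ok := clockDeltaOK(guest, host, 2*time.Second)
        fmt.Printf("delta=%s withinTolerance=%v\n", d, ok)
    }
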
	I0717 22:04:36.898584   34695 start.go:83] releasing machines lock for "multinode-009530-m02", held for 22.657471267s
	I0717 22:04:36.898600   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .DriverName
	I0717 22:04:36.898921   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetIP
	I0717 22:04:36.901564   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:36.901948   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:62", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:04:29 +0000 UTC Type:0 Mac:52:54:00:2a:ac:62 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-009530-m02 Clientid:01:52:54:00:2a:ac:62}
	I0717 22:04:36.901984   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:36.904576   34695 out.go:177] * Found network options:
	I0717 22:04:36.906608   34695 out.go:177]   - NO_PROXY=192.168.39.222
	W0717 22:04:36.908253   34695 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 22:04:36.908298   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .DriverName
	I0717 22:04:36.908897   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .DriverName
	I0717 22:04:36.909101   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .DriverName
	I0717 22:04:36.909221   34695 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 22:04:36.909261   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHHostname
	W0717 22:04:36.909292   34695 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 22:04:36.909368   34695 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 22:04:36.909392   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHHostname
	I0717 22:04:36.911939   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:36.912054   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:36.912343   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:62", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:04:29 +0000 UTC Type:0 Mac:52:54:00:2a:ac:62 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-009530-m02 Clientid:01:52:54:00:2a:ac:62}
	I0717 22:04:36.912372   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:36.912428   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:62", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:04:29 +0000 UTC Type:0 Mac:52:54:00:2a:ac:62 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-009530-m02 Clientid:01:52:54:00:2a:ac:62}
	I0717 22:04:36.912459   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:36.912546   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHPort
	I0717 22:04:36.912663   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHPort
	I0717 22:04:36.912751   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHKeyPath
	I0717 22:04:36.912822   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHKeyPath
	I0717 22:04:36.912887   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHUsername
	I0717 22:04:36.913041   34695 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530-m02/id_rsa Username:docker}
	I0717 22:04:36.913062   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHUsername
	I0717 22:04:36.913209   34695 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530-m02/id_rsa Username:docker}
	I0717 22:04:37.154733   34695 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0717 22:04:37.154848   34695 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 22:04:37.161036   34695 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0717 22:04:37.161173   34695 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 22:04:37.161264   34695 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:04:37.175954   34695 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0717 22:04:37.176281   34695 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 22:04:37.176300   34695 start.go:466] detecting cgroup driver to use...
	I0717 22:04:37.176368   34695 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 22:04:37.190991   34695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 22:04:37.203634   34695 docker.go:196] disabling cri-docker service (if available) ...
	I0717 22:04:37.203705   34695 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 22:04:37.216725   34695 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 22:04:37.229475   34695 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 22:04:37.334836   34695 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0717 22:04:37.334919   34695 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 22:04:37.349025   34695 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0717 22:04:37.457642   34695 docker.go:212] disabling docker service ...
	I0717 22:04:37.457720   34695 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 22:04:37.471107   34695 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 22:04:37.482807   34695 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0717 22:04:37.482993   34695 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 22:04:37.496726   34695 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0717 22:04:37.590212   34695 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 22:04:37.709250   34695 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0717 22:04:37.709274   34695 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0717 22:04:37.709333   34695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 22:04:37.721826   34695 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 22:04:37.739818   34695 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0717 22:04:37.740270   34695 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 22:04:37.740319   34695 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:04:37.749772   34695 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 22:04:37.749850   34695 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:04:37.759008   34695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:04:37.768002   34695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:04:37.777372   34695 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 22:04:37.787278   34695 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 22:04:37.795349   34695 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 22:04:37.795456   34695 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 22:04:37.795512   34695 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 22:04:37.807729   34695 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 22:04:37.816939   34695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 22:04:37.928623   34695 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 22:04:38.107672   34695 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 22:04:38.107808   34695 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 22:04:38.112726   34695 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0717 22:04:38.112749   34695 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0717 22:04:38.112758   34695 command_runner.go:130] > Device: 16h/22d	Inode: 709         Links: 1
	I0717 22:04:38.112768   34695 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 22:04:38.112775   34695 command_runner.go:130] > Access: 2023-07-17 22:04:38.074462145 +0000
	I0717 22:04:38.112788   34695 command_runner.go:130] > Modify: 2023-07-17 22:04:38.074462145 +0000
	I0717 22:04:38.112796   34695 command_runner.go:130] > Change: 2023-07-17 22:04:38.074462145 +0000
	I0717 22:04:38.112806   34695 command_runner.go:130] >  Birth: -
	I0717 22:04:38.112925   34695 start.go:534] Will wait 60s for crictl version
	I0717 22:04:38.112986   34695 ssh_runner.go:195] Run: which crictl
	I0717 22:04:38.116618   34695 command_runner.go:130] > /usr/bin/crictl
	I0717 22:04:38.116928   34695 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 22:04:38.150600   34695 command_runner.go:130] > Version:  0.1.0
	I0717 22:04:38.150625   34695 command_runner.go:130] > RuntimeName:  cri-o
	I0717 22:04:38.150633   34695 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0717 22:04:38.150642   34695 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0717 22:04:38.152188   34695 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 22:04:38.152274   34695 ssh_runner.go:195] Run: crio --version
	I0717 22:04:38.202153   34695 command_runner.go:130] > crio version 1.24.1
	I0717 22:04:38.202181   34695 command_runner.go:130] > Version:          1.24.1
	I0717 22:04:38.202191   34695 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0717 22:04:38.202198   34695 command_runner.go:130] > GitTreeState:     dirty
	I0717 22:04:38.202208   34695 command_runner.go:130] > BuildDate:        2023-07-15T02:24:22Z
	I0717 22:04:38.202215   34695 command_runner.go:130] > GoVersion:        go1.19.9
	I0717 22:04:38.202222   34695 command_runner.go:130] > Compiler:         gc
	I0717 22:04:38.202229   34695 command_runner.go:130] > Platform:         linux/amd64
	I0717 22:04:38.202236   34695 command_runner.go:130] > Linkmode:         dynamic
	I0717 22:04:38.202247   34695 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0717 22:04:38.202254   34695 command_runner.go:130] > SeccompEnabled:   true
	I0717 22:04:38.202261   34695 command_runner.go:130] > AppArmorEnabled:  false
	I0717 22:04:38.203882   34695 ssh_runner.go:195] Run: crio --version
	I0717 22:04:38.256979   34695 command_runner.go:130] > crio version 1.24.1
	I0717 22:04:38.257007   34695 command_runner.go:130] > Version:          1.24.1
	I0717 22:04:38.257019   34695 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0717 22:04:38.257027   34695 command_runner.go:130] > GitTreeState:     dirty
	I0717 22:04:38.257036   34695 command_runner.go:130] > BuildDate:        2023-07-15T02:24:22Z
	I0717 22:04:38.257044   34695 command_runner.go:130] > GoVersion:        go1.19.9
	I0717 22:04:38.257051   34695 command_runner.go:130] > Compiler:         gc
	I0717 22:04:38.257059   34695 command_runner.go:130] > Platform:         linux/amd64
	I0717 22:04:38.257068   34695 command_runner.go:130] > Linkmode:         dynamic
	I0717 22:04:38.257081   34695 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0717 22:04:38.257092   34695 command_runner.go:130] > SeccompEnabled:   true
	I0717 22:04:38.257099   34695 command_runner.go:130] > AppArmorEnabled:  false
	I0717 22:04:38.260465   34695 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 22:04:38.262067   34695 out.go:177]   - env NO_PROXY=192.168.39.222
	I0717 22:04:38.263438   34695 main.go:141] libmachine: (multinode-009530-m02) Calling .GetIP
	I0717 22:04:38.265955   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:38.266227   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:62", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:04:29 +0000 UTC Type:0 Mac:52:54:00:2a:ac:62 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-009530-m02 Clientid:01:52:54:00:2a:ac:62}
	I0717 22:04:38.266253   34695 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:04:38.266450   34695 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 22:04:38.270761   34695 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:04:38.283746   34695 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530 for IP: 192.168.39.146
	I0717 22:04:38.283776   34695 certs.go:190] acquiring lock for shared ca certs: {Name:mk358cdd8ffcf2f8ada4337c2bb687d932ef5afe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:04:38.283926   34695 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key
	I0717 22:04:38.283988   34695 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key
	I0717 22:04:38.284004   34695 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 22:04:38.284023   34695 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 22:04:38.284037   34695 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 22:04:38.284054   34695 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 22:04:38.284117   34695 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem (1338 bytes)
	W0717 22:04:38.284153   34695 certs.go:433] ignoring /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990_empty.pem, impossibly tiny 0 bytes
	I0717 22:04:38.284169   34695 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 22:04:38.284204   34695 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem (1078 bytes)
	I0717 22:04:38.284235   34695 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem (1123 bytes)
	I0717 22:04:38.284282   34695 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem (1675 bytes)
	I0717 22:04:38.284350   34695 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:04:38.284387   34695 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> /usr/share/ca-certificates/229902.pem
	I0717 22:04:38.284408   34695 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:04:38.284425   34695 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem -> /usr/share/ca-certificates/22990.pem
	I0717 22:04:38.284828   34695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 22:04:38.309695   34695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 22:04:38.333969   34695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 22:04:38.358272   34695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 22:04:38.382361   34695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /usr/share/ca-certificates/229902.pem (1708 bytes)
	I0717 22:04:38.407985   34695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 22:04:38.432472   34695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem --> /usr/share/ca-certificates/22990.pem (1338 bytes)
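Each NewFileAsset line above pairs a local certificate with its destination path on the node, and the following scp lines push those pairs over SSH. A minimal sketch of that mapping using the system scp client is shown below; the SSH user/address placeholder and the shell-out to scp are assumptions for illustration only (minikube uses its own SSH runner rather than the scp binary):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// local source -> destination path on the node, mirroring the vm_assets lines above
    	assets := map[string]string{
    		"/home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt":          "/var/lib/minikube/certs/ca.crt",
    		"/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem": "/usr/share/ca-certificates/22990.pem",
    	}
    	for src, dst := range assets {
    		// "docker@192.168.39.146" is a hypothetical SSH target, not taken from the log.
    		cmd := exec.Command("scp", src, "docker@192.168.39.146:"+dst)
    		if out, err := cmd.CombinedOutput(); err != nil {
    			fmt.Printf("copy %s failed: %v\n%s", src, err, out)
    		}
    	}
    }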
	I0717 22:04:38.458601   34695 ssh_runner.go:195] Run: openssl version
	I0717 22:04:38.464399   34695 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0717 22:04:38.464475   34695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22990.pem && ln -fs /usr/share/ca-certificates/22990.pem /etc/ssl/certs/22990.pem"
	I0717 22:04:38.474440   34695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22990.pem
	I0717 22:04:38.479189   34695 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 21:49 /usr/share/ca-certificates/22990.pem
	I0717 22:04:38.479217   34695 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 21:49 /usr/share/ca-certificates/22990.pem
	I0717 22:04:38.479261   34695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22990.pem
	I0717 22:04:38.484738   34695 command_runner.go:130] > 51391683
	I0717 22:04:38.484796   34695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22990.pem /etc/ssl/certs/51391683.0"
	I0717 22:04:38.495053   34695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/229902.pem && ln -fs /usr/share/ca-certificates/229902.pem /etc/ssl/certs/229902.pem"
	I0717 22:04:38.504775   34695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/229902.pem
	I0717 22:04:38.510022   34695 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 21:49 /usr/share/ca-certificates/229902.pem
	I0717 22:04:38.510052   34695 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 21:49 /usr/share/ca-certificates/229902.pem
	I0717 22:04:38.510106   34695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/229902.pem
	I0717 22:04:38.515607   34695 command_runner.go:130] > 3ec20f2e
	I0717 22:04:38.515723   34695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/229902.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 22:04:38.525714   34695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 22:04:38.535600   34695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:04:38.540753   34695 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:04:38.540784   34695 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:04:38.540838   34695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:04:38.546418   34695 command_runner.go:130] > b5213941
	I0717 22:04:38.546670   34695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
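The openssl/ln sequence above is the standard OpenSSL trust-store convention: each CA certificate is hashed with `openssl x509 -hash -noout -in <cert>` and a symlink named `<hash>.0` (e.g. 51391683.0, b5213941.0) is created under /etc/ssl/certs so the issuer can be found by subject hash. A minimal local sketch of the same two steps, assuming a local run rather than minikube's SSH runner:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash computes the OpenSSL subject hash of certPath and
    // creates the <hash>.0 symlink in certsDir, mirroring the commands in the log.
    func linkBySubjectHash(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "51391683"
    	link := filepath.Join(certsDir, hash+".0")
    	// Replace any stale link, like `ln -fs` in the log.
    	_ = os.Remove(link)
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/22990.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }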
	I0717 22:04:38.556748   34695 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 22:04:38.560794   34695 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 22:04:38.561004   34695 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
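The etcd certs check above relies only on the exit status of a plain `ls`: a non-zero status with "No such file or directory" is read as "certs directory doesn't exist, likely first start". A hedged sketch of that exit-code pattern, run locally here instead of over SSH as minikube does:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    // looksLikeFirstStart returns true when `ls dir` exits non-zero, which the
    // log above interprets as the certs directory never having been created.
    func looksLikeFirstStart(dir string) bool {
    	err := exec.Command("ls", dir).Run()
    	var exitErr *exec.ExitError
    	return errors.As(err, &exitErr) && exitErr.ExitCode() != 0
    }

    func main() {
    	fmt.Println(looksLikeFirstStart("/var/lib/minikube/certs/etcd"))
    }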
	I0717 22:04:38.561089   34695 ssh_runner.go:195] Run: crio config
	I0717 22:04:38.613443   34695 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0717 22:04:38.613479   34695 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0717 22:04:38.613490   34695 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0717 22:04:38.613496   34695 command_runner.go:130] > #
	I0717 22:04:38.613508   34695 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0717 22:04:38.613528   34695 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0717 22:04:38.613538   34695 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0717 22:04:38.613556   34695 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0717 22:04:38.613569   34695 command_runner.go:130] > # reload'.
	I0717 22:04:38.613579   34695 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0717 22:04:38.613590   34695 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0717 22:04:38.613602   34695 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0717 22:04:38.613614   34695 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0717 22:04:38.613629   34695 command_runner.go:130] > [crio]
	I0717 22:04:38.613639   34695 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0717 22:04:38.613647   34695 command_runner.go:130] > # containers images, in this directory.
	I0717 22:04:38.613686   34695 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0717 22:04:38.613704   34695 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0717 22:04:38.613729   34695 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0717 22:04:38.613744   34695 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0717 22:04:38.613760   34695 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0717 22:04:38.613949   34695 command_runner.go:130] > storage_driver = "overlay"
	I0717 22:04:38.613967   34695 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0717 22:04:38.613978   34695 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0717 22:04:38.613985   34695 command_runner.go:130] > storage_option = [
	I0717 22:04:38.614538   34695 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0717 22:04:38.614549   34695 command_runner.go:130] > ]
	I0717 22:04:38.614560   34695 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0717 22:04:38.614570   34695 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0717 22:04:38.614578   34695 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0717 22:04:38.614591   34695 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0717 22:04:38.614619   34695 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0717 22:04:38.614624   34695 command_runner.go:130] > # always happen on a node reboot
	I0717 22:04:38.614629   34695 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0717 22:04:38.614637   34695 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0717 22:04:38.614643   34695 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0717 22:04:38.614654   34695 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0717 22:04:38.614663   34695 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0717 22:04:38.614679   34695 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0717 22:04:38.614697   34695 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0717 22:04:38.614708   34695 command_runner.go:130] > # internal_wipe = true
	I0717 22:04:38.614716   34695 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0717 22:04:38.614724   34695 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0717 22:04:38.614731   34695 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0717 22:04:38.614736   34695 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0717 22:04:38.614745   34695 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0717 22:04:38.614749   34695 command_runner.go:130] > [crio.api]
	I0717 22:04:38.614758   34695 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0717 22:04:38.614769   34695 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0717 22:04:38.614782   34695 command_runner.go:130] > # IP address on which the stream server will listen.
	I0717 22:04:38.614793   34695 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0717 22:04:38.614807   34695 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0717 22:04:38.614818   34695 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0717 22:04:38.614825   34695 command_runner.go:130] > # stream_port = "0"
	I0717 22:04:38.614831   34695 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0717 22:04:38.614837   34695 command_runner.go:130] > # stream_enable_tls = false
	I0717 22:04:38.614847   34695 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0717 22:04:38.614857   34695 command_runner.go:130] > # stream_idle_timeout = ""
	I0717 22:04:38.614866   34695 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0717 22:04:38.614879   34695 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0717 22:04:38.614885   34695 command_runner.go:130] > # minutes.
	I0717 22:04:38.614896   34695 command_runner.go:130] > # stream_tls_cert = ""
	I0717 22:04:38.614909   34695 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0717 22:04:38.614923   34695 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0717 22:04:38.614933   34695 command_runner.go:130] > # stream_tls_key = ""
	I0717 22:04:38.614948   34695 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0717 22:04:38.614962   34695 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0717 22:04:38.614974   34695 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0717 22:04:38.614982   34695 command_runner.go:130] > # stream_tls_ca = ""
	I0717 22:04:38.614996   34695 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0717 22:04:38.615007   34695 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0717 22:04:38.615023   34695 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0717 22:04:38.615034   34695 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0717 22:04:38.615099   34695 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0717 22:04:38.615114   34695 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0717 22:04:38.615121   34695 command_runner.go:130] > [crio.runtime]
	I0717 22:04:38.615131   34695 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0717 22:04:38.615144   34695 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0717 22:04:38.615153   34695 command_runner.go:130] > # "nofile=1024:2048"
	I0717 22:04:38.615163   34695 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0717 22:04:38.615172   34695 command_runner.go:130] > # default_ulimits = [
	I0717 22:04:38.615178   34695 command_runner.go:130] > # ]
	I0717 22:04:38.615196   34695 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0717 22:04:38.615203   34695 command_runner.go:130] > # no_pivot = false
	I0717 22:04:38.615212   34695 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0717 22:04:38.615223   34695 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0717 22:04:38.615235   34695 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0717 22:04:38.615245   34695 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0717 22:04:38.615253   34695 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0717 22:04:38.615268   34695 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 22:04:38.615279   34695 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0717 22:04:38.615290   34695 command_runner.go:130] > # Cgroup setting for conmon
	I0717 22:04:38.615305   34695 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0717 22:04:38.615314   34695 command_runner.go:130] > conmon_cgroup = "pod"
	I0717 22:04:38.615325   34695 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0717 22:04:38.615336   34695 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0717 22:04:38.615351   34695 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 22:04:38.615361   34695 command_runner.go:130] > conmon_env = [
	I0717 22:04:38.615371   34695 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0717 22:04:38.615381   34695 command_runner.go:130] > ]
	I0717 22:04:38.615390   34695 command_runner.go:130] > # Additional environment variables to set for all the
	I0717 22:04:38.615399   34695 command_runner.go:130] > # containers. These are overridden if set in the
	I0717 22:04:38.615408   34695 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0717 22:04:38.615418   34695 command_runner.go:130] > # default_env = [
	I0717 22:04:38.615426   34695 command_runner.go:130] > # ]
	I0717 22:04:38.615435   34695 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0717 22:04:38.615445   34695 command_runner.go:130] > # selinux = false
	I0717 22:04:38.615455   34695 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0717 22:04:38.615467   34695 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0717 22:04:38.615478   34695 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0717 22:04:38.615489   34695 command_runner.go:130] > # seccomp_profile = ""
	I0717 22:04:38.615498   34695 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0717 22:04:38.615511   34695 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0717 22:04:38.615525   34695 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0717 22:04:38.615536   34695 command_runner.go:130] > # which might increase security.
	I0717 22:04:38.615548   34695 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0717 22:04:38.615562   34695 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0717 22:04:38.615573   34695 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0717 22:04:38.615586   34695 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0717 22:04:38.615607   34695 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0717 22:04:38.615619   34695 command_runner.go:130] > # This option supports live configuration reload.
	I0717 22:04:38.615630   34695 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0717 22:04:38.615640   34695 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0717 22:04:38.615649   34695 command_runner.go:130] > # the cgroup blockio controller.
	I0717 22:04:38.615657   34695 command_runner.go:130] > # blockio_config_file = ""
	I0717 22:04:38.615668   34695 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0717 22:04:38.615678   34695 command_runner.go:130] > # irqbalance daemon.
	I0717 22:04:38.615688   34695 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0717 22:04:38.615702   34695 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0717 22:04:38.615714   34695 command_runner.go:130] > # This option supports live configuration reload.
	I0717 22:04:38.615748   34695 command_runner.go:130] > # rdt_config_file = ""
	I0717 22:04:38.615761   34695 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0717 22:04:38.615769   34695 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0717 22:04:38.615780   34695 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0717 22:04:38.615876   34695 command_runner.go:130] > # separate_pull_cgroup = ""
	I0717 22:04:38.615892   34695 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0717 22:04:38.615898   34695 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0717 22:04:38.615902   34695 command_runner.go:130] > # will be added.
	I0717 22:04:38.616188   34695 command_runner.go:130] > # default_capabilities = [
	I0717 22:04:38.616476   34695 command_runner.go:130] > # 	"CHOWN",
	I0717 22:04:38.616491   34695 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0717 22:04:38.616498   34695 command_runner.go:130] > # 	"FSETID",
	I0717 22:04:38.616504   34695 command_runner.go:130] > # 	"FOWNER",
	I0717 22:04:38.616510   34695 command_runner.go:130] > # 	"SETGID",
	I0717 22:04:38.616520   34695 command_runner.go:130] > # 	"SETUID",
	I0717 22:04:38.616544   34695 command_runner.go:130] > # 	"SETPCAP",
	I0717 22:04:38.617081   34695 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0717 22:04:38.617176   34695 command_runner.go:130] > # 	"KILL",
	I0717 22:04:38.617453   34695 command_runner.go:130] > # ]
	I0717 22:04:38.617472   34695 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0717 22:04:38.617492   34695 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 22:04:38.617502   34695 command_runner.go:130] > # default_sysctls = [
	I0717 22:04:38.617509   34695 command_runner.go:130] > # ]
	I0717 22:04:38.617535   34695 command_runner.go:130] > # List of devices on the host that a
	I0717 22:04:38.617550   34695 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0717 22:04:38.617561   34695 command_runner.go:130] > # allowed_devices = [
	I0717 22:04:38.617591   34695 command_runner.go:130] > # 	"/dev/fuse",
	I0717 22:04:38.617602   34695 command_runner.go:130] > # ]
	I0717 22:04:38.617610   34695 command_runner.go:130] > # List of additional devices, specified as
	I0717 22:04:38.617623   34695 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0717 22:04:38.617635   34695 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0717 22:04:38.617660   34695 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 22:04:38.617671   34695 command_runner.go:130] > # additional_devices = [
	I0717 22:04:38.617677   34695 command_runner.go:130] > # ]
	I0717 22:04:38.617688   34695 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0717 22:04:38.617695   34695 command_runner.go:130] > # cdi_spec_dirs = [
	I0717 22:04:38.617713   34695 command_runner.go:130] > # 	"/etc/cdi",
	I0717 22:04:38.617723   34695 command_runner.go:130] > # 	"/var/run/cdi",
	I0717 22:04:38.617729   34695 command_runner.go:130] > # ]
	I0717 22:04:38.617743   34695 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0717 22:04:38.617756   34695 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0717 22:04:38.617764   34695 command_runner.go:130] > # Defaults to false.
	I0717 22:04:38.617798   34695 command_runner.go:130] > # device_ownership_from_security_context = false
	I0717 22:04:38.617811   34695 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0717 22:04:38.617822   34695 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0717 22:04:38.617832   34695 command_runner.go:130] > # hooks_dir = [
	I0717 22:04:38.617844   34695 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0717 22:04:38.617850   34695 command_runner.go:130] > # ]
	I0717 22:04:38.617865   34695 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0717 22:04:38.617874   34695 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0717 22:04:38.617885   34695 command_runner.go:130] > # its default mounts from the following two files:
	I0717 22:04:38.617894   34695 command_runner.go:130] > #
	I0717 22:04:38.617905   34695 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0717 22:04:38.617917   34695 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0717 22:04:38.617929   34695 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0717 22:04:38.617938   34695 command_runner.go:130] > #
	I0717 22:04:38.617949   34695 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0717 22:04:38.617963   34695 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0717 22:04:38.617977   34695 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0717 22:04:38.617988   34695 command_runner.go:130] > #      only add mounts it finds in this file.
	I0717 22:04:38.617993   34695 command_runner.go:130] > #
	I0717 22:04:38.618004   34695 command_runner.go:130] > # default_mounts_file = ""
	I0717 22:04:38.618017   34695 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0717 22:04:38.618033   34695 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0717 22:04:38.618063   34695 command_runner.go:130] > pids_limit = 1024
	I0717 22:04:38.618077   34695 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0717 22:04:38.618091   34695 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0717 22:04:38.618105   34695 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0717 22:04:38.618123   34695 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0717 22:04:38.618130   34695 command_runner.go:130] > # log_size_max = -1
	I0717 22:04:38.618145   34695 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0717 22:04:38.618155   34695 command_runner.go:130] > # log_to_journald = false
	I0717 22:04:38.618167   34695 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0717 22:04:38.618179   34695 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0717 22:04:38.618191   34695 command_runner.go:130] > # Path to directory for container attach sockets.
	I0717 22:04:38.618200   34695 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0717 22:04:38.618212   34695 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0717 22:04:38.618223   34695 command_runner.go:130] > # bind_mount_prefix = ""
	I0717 22:04:38.618236   34695 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0717 22:04:38.618246   34695 command_runner.go:130] > # read_only = false
	I0717 22:04:38.618260   34695 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0717 22:04:38.618275   34695 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0717 22:04:38.618285   34695 command_runner.go:130] > # live configuration reload.
	I0717 22:04:38.618293   34695 command_runner.go:130] > # log_level = "info"
	I0717 22:04:38.618306   34695 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0717 22:04:38.618318   34695 command_runner.go:130] > # This option supports live configuration reload.
	I0717 22:04:38.618328   34695 command_runner.go:130] > # log_filter = ""
	I0717 22:04:38.618340   34695 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0717 22:04:38.618354   34695 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0717 22:04:38.618364   34695 command_runner.go:130] > # separated by comma.
	I0717 22:04:38.618373   34695 command_runner.go:130] > # uid_mappings = ""
	I0717 22:04:38.618384   34695 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0717 22:04:38.618398   34695 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0717 22:04:38.618408   34695 command_runner.go:130] > # separated by comma.
	I0717 22:04:38.618419   34695 command_runner.go:130] > # gid_mappings = ""
	I0717 22:04:38.618433   34695 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0717 22:04:38.618447   34695 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 22:04:38.618461   34695 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 22:04:38.618472   34695 command_runner.go:130] > # minimum_mappable_uid = -1
	I0717 22:04:38.618483   34695 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0717 22:04:38.618498   34695 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 22:04:38.618512   34695 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 22:04:38.618527   34695 command_runner.go:130] > # minimum_mappable_gid = -1
	I0717 22:04:38.618542   34695 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0717 22:04:38.618556   34695 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0717 22:04:38.618565   34695 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0717 22:04:38.618573   34695 command_runner.go:130] > # ctr_stop_timeout = 30
	I0717 22:04:38.618586   34695 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0717 22:04:38.618598   34695 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0717 22:04:38.618606   34695 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0717 22:04:38.618614   34695 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0717 22:04:38.618620   34695 command_runner.go:130] > drop_infra_ctr = false
	I0717 22:04:38.618626   34695 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0717 22:04:38.618634   34695 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0717 22:04:38.618641   34695 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0717 22:04:38.618647   34695 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0717 22:04:38.618653   34695 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0717 22:04:38.618660   34695 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0717 22:04:38.618679   34695 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0717 22:04:38.618694   34695 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0717 22:04:38.618706   34695 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0717 22:04:38.618718   34695 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0717 22:04:38.618733   34695 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0717 22:04:38.618747   34695 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0717 22:04:38.618758   34695 command_runner.go:130] > # default_runtime = "runc"
	I0717 22:04:38.618768   34695 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0717 22:04:38.618785   34695 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0717 22:04:38.618800   34695 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0717 22:04:38.618812   34695 command_runner.go:130] > # creation as a file is not desired either.
	I0717 22:04:38.618828   34695 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0717 22:04:38.618840   34695 command_runner.go:130] > # the hostname is being managed dynamically.
	I0717 22:04:38.618858   34695 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0717 22:04:38.618865   34695 command_runner.go:130] > # ]
	I0717 22:04:38.618879   34695 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0717 22:04:38.618893   34695 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0717 22:04:38.618908   34695 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0717 22:04:38.618923   34695 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0717 22:04:38.618932   34695 command_runner.go:130] > #
	I0717 22:04:38.618941   34695 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0717 22:04:38.618953   34695 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0717 22:04:38.618961   34695 command_runner.go:130] > #  runtime_type = "oci"
	I0717 22:04:38.618971   34695 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0717 22:04:38.618982   34695 command_runner.go:130] > #  privileged_without_host_devices = false
	I0717 22:04:38.618992   34695 command_runner.go:130] > #  allowed_annotations = []
	I0717 22:04:38.618998   34695 command_runner.go:130] > # Where:
	I0717 22:04:38.619010   34695 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0717 22:04:38.619019   34695 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0717 22:04:38.619029   34695 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0717 22:04:38.619042   34695 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0717 22:04:38.619052   34695 command_runner.go:130] > #   in $PATH.
	I0717 22:04:38.619061   34695 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0717 22:04:38.619072   34695 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0717 22:04:38.619086   34695 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0717 22:04:38.619093   34695 command_runner.go:130] > #   state.
	I0717 22:04:38.619108   34695 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0717 22:04:38.619122   34695 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0717 22:04:38.619136   34695 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0717 22:04:38.619149   34695 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0717 22:04:38.619161   34695 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0717 22:04:38.619176   34695 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0717 22:04:38.619188   34695 command_runner.go:130] > #   The currently recognized values are:
	I0717 22:04:38.619202   34695 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0717 22:04:38.619217   34695 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0717 22:04:38.619231   34695 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0717 22:04:38.619246   34695 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0717 22:04:38.619263   34695 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0717 22:04:38.619277   34695 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0717 22:04:38.619291   34695 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0717 22:04:38.619306   34695 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0717 22:04:38.619320   34695 command_runner.go:130] > #   should be moved to the container's cgroup
	I0717 22:04:38.619330   34695 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0717 22:04:38.619339   34695 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0717 22:04:38.619349   34695 command_runner.go:130] > runtime_type = "oci"
	I0717 22:04:38.619360   34695 command_runner.go:130] > runtime_root = "/run/runc"
	I0717 22:04:38.619370   34695 command_runner.go:130] > runtime_config_path = ""
	I0717 22:04:38.619381   34695 command_runner.go:130] > monitor_path = ""
	I0717 22:04:38.619390   34695 command_runner.go:130] > monitor_cgroup = ""
	I0717 22:04:38.619400   34695 command_runner.go:130] > monitor_exec_cgroup = ""
	I0717 22:04:38.619412   34695 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0717 22:04:38.619421   34695 command_runner.go:130] > # running containers
	I0717 22:04:38.619430   34695 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0717 22:04:38.619444   34695 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0717 22:04:38.619475   34695 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0717 22:04:38.619489   34695 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0717 22:04:38.619501   34695 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0717 22:04:38.619511   34695 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0717 22:04:38.619520   34695 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0717 22:04:38.619531   34695 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0717 22:04:38.619543   34695 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0717 22:04:38.619554   34695 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0717 22:04:38.619593   34695 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0717 22:04:38.619605   34695 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0717 22:04:38.619618   34695 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0717 22:04:38.619635   34695 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0717 22:04:38.619652   34695 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0717 22:04:38.619665   34695 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0717 22:04:38.619686   34695 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0717 22:04:38.619711   34695 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0717 22:04:38.619725   34695 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0717 22:04:38.619741   34695 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0717 22:04:38.619751   34695 command_runner.go:130] > # Example:
	I0717 22:04:38.619765   34695 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0717 22:04:38.619777   34695 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0717 22:04:38.619789   34695 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0717 22:04:38.619799   34695 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0717 22:04:38.619808   34695 command_runner.go:130] > # cpuset = 0
	I0717 22:04:38.619820   34695 command_runner.go:130] > # cpushares = "0-1"
	I0717 22:04:38.619829   34695 command_runner.go:130] > # Where:
	I0717 22:04:38.619839   34695 command_runner.go:130] > # The workload name is workload-type.
	I0717 22:04:38.619855   34695 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0717 22:04:38.619869   34695 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0717 22:04:38.619883   34695 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0717 22:04:38.619898   34695 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0717 22:04:38.619913   34695 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0717 22:04:38.619921   34695 command_runner.go:130] > # 
	I0717 22:04:38.619933   34695 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0717 22:04:38.619942   34695 command_runner.go:130] > #
	I0717 22:04:38.619953   34695 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0717 22:04:38.619967   34695 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0717 22:04:38.619981   34695 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0717 22:04:38.619996   34695 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0717 22:04:38.620009   34695 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0717 22:04:38.620019   34695 command_runner.go:130] > [crio.image]
	I0717 22:04:38.620033   34695 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0717 22:04:38.620044   34695 command_runner.go:130] > # default_transport = "docker://"
	I0717 22:04:38.620059   34695 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0717 22:04:38.620074   34695 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0717 22:04:38.620084   34695 command_runner.go:130] > # global_auth_file = ""
	I0717 22:04:38.620096   34695 command_runner.go:130] > # The image used to instantiate infra containers.
	I0717 22:04:38.620109   34695 command_runner.go:130] > # This option supports live configuration reload.
	I0717 22:04:38.620121   34695 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0717 22:04:38.620136   34695 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0717 22:04:38.620147   34695 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0717 22:04:38.620159   34695 command_runner.go:130] > # This option supports live configuration reload.
	I0717 22:04:38.620170   34695 command_runner.go:130] > # pause_image_auth_file = ""
	I0717 22:04:38.620184   34695 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0717 22:04:38.620197   34695 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0717 22:04:38.620211   34695 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0717 22:04:38.620225   34695 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0717 22:04:38.620234   34695 command_runner.go:130] > # pause_command = "/pause"
	I0717 22:04:38.620248   34695 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0717 22:04:38.620262   34695 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0717 22:04:38.620277   34695 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0717 22:04:38.620291   34695 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0717 22:04:38.620304   34695 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0717 22:04:38.620313   34695 command_runner.go:130] > # signature_policy = ""
	I0717 22:04:38.620324   34695 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0717 22:04:38.620339   34695 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0717 22:04:38.620350   34695 command_runner.go:130] > # changing them here.
	I0717 22:04:38.620360   34695 command_runner.go:130] > # insecure_registries = [
	I0717 22:04:38.620369   34695 command_runner.go:130] > # ]
	I0717 22:04:38.620381   34695 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0717 22:04:38.620393   34695 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0717 22:04:38.620403   34695 command_runner.go:130] > # image_volumes = "mkdir"
	I0717 22:04:38.620413   34695 command_runner.go:130] > # Temporary directory to use for storing big files
	I0717 22:04:38.620424   34695 command_runner.go:130] > # big_files_temporary_dir = ""
	I0717 22:04:38.620438   34695 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0717 22:04:38.620448   34695 command_runner.go:130] > # CNI plugins.
	I0717 22:04:38.620457   34695 command_runner.go:130] > [crio.network]
	I0717 22:04:38.620468   34695 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0717 22:04:38.620499   34695 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0717 22:04:38.620509   34695 command_runner.go:130] > # cni_default_network = ""
	I0717 22:04:38.620520   34695 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0717 22:04:38.620531   34695 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0717 22:04:38.620545   34695 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0717 22:04:38.620555   34695 command_runner.go:130] > # plugin_dirs = [
	I0717 22:04:38.620565   34695 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0717 22:04:38.620573   34695 command_runner.go:130] > # ]
	I0717 22:04:38.620586   34695 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0717 22:04:38.620596   34695 command_runner.go:130] > [crio.metrics]
	I0717 22:04:38.620608   34695 command_runner.go:130] > # Globally enable or disable metrics support.
	I0717 22:04:38.620619   34695 command_runner.go:130] > enable_metrics = true
	I0717 22:04:38.620631   34695 command_runner.go:130] > # Specify enabled metrics collectors.
	I0717 22:04:38.620640   34695 command_runner.go:130] > # Per default all metrics are enabled.
	I0717 22:04:38.620654   34695 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0717 22:04:38.620668   34695 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0717 22:04:38.620682   34695 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0717 22:04:38.620692   34695 command_runner.go:130] > # metrics_collectors = [
	I0717 22:04:38.620706   34695 command_runner.go:130] > # 	"operations",
	I0717 22:04:38.620718   34695 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0717 22:04:38.620729   34695 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0717 22:04:38.620739   34695 command_runner.go:130] > # 	"operations_errors",
	I0717 22:04:38.620751   34695 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0717 22:04:38.620764   34695 command_runner.go:130] > # 	"image_pulls_by_name",
	I0717 22:04:38.620775   34695 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0717 22:04:38.620786   34695 command_runner.go:130] > # 	"image_pulls_failures",
	I0717 22:04:38.620796   34695 command_runner.go:130] > # 	"image_pulls_successes",
	I0717 22:04:38.620804   34695 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0717 22:04:38.620814   34695 command_runner.go:130] > # 	"image_layer_reuse",
	I0717 22:04:38.620825   34695 command_runner.go:130] > # 	"containers_oom_total",
	I0717 22:04:38.620833   34695 command_runner.go:130] > # 	"containers_oom",
	I0717 22:04:38.620844   34695 command_runner.go:130] > # 	"processes_defunct",
	I0717 22:04:38.620854   34695 command_runner.go:130] > # 	"operations_total",
	I0717 22:04:38.620864   34695 command_runner.go:130] > # 	"operations_latency_seconds",
	I0717 22:04:38.620875   34695 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0717 22:04:38.620890   34695 command_runner.go:130] > # 	"operations_errors_total",
	I0717 22:04:38.620898   34695 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0717 22:04:38.620910   34695 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0717 22:04:38.620918   34695 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0717 22:04:38.620925   34695 command_runner.go:130] > # 	"image_pulls_success_total",
	I0717 22:04:38.620937   34695 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0717 22:04:38.620948   34695 command_runner.go:130] > # 	"containers_oom_count_total",
	I0717 22:04:38.620956   34695 command_runner.go:130] > # ]
	I0717 22:04:38.620965   34695 command_runner.go:130] > # The port on which the metrics server will listen.
	I0717 22:04:38.620974   34695 command_runner.go:130] > # metrics_port = 9090
	I0717 22:04:38.620983   34695 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0717 22:04:38.620992   34695 command_runner.go:130] > # metrics_socket = ""
	I0717 22:04:38.620998   34695 command_runner.go:130] > # The certificate for the secure metrics server.
	I0717 22:04:38.621007   34695 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0717 22:04:38.621013   34695 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0717 22:04:38.621021   34695 command_runner.go:130] > # certificate on any modification event.
	I0717 22:04:38.621025   34695 command_runner.go:130] > # metrics_cert = ""
	I0717 22:04:38.621030   34695 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0717 22:04:38.621035   34695 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0717 22:04:38.621042   34695 command_runner.go:130] > # metrics_key = ""
	I0717 22:04:38.621049   34695 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0717 22:04:38.621055   34695 command_runner.go:130] > [crio.tracing]
	I0717 22:04:38.621060   34695 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0717 22:04:38.621066   34695 command_runner.go:130] > # enable_tracing = false
	I0717 22:04:38.621072   34695 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0717 22:04:38.621078   34695 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0717 22:04:38.621083   34695 command_runner.go:130] > # Number of samples to collect per million spans.
	I0717 22:04:38.621088   34695 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0717 22:04:38.621096   34695 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0717 22:04:38.621101   34695 command_runner.go:130] > [crio.stats]
	I0717 22:04:38.621109   34695 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0717 22:04:38.621115   34695 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0717 22:04:38.621121   34695 command_runner.go:130] > # stats_collection_period = 0
	I0717 22:04:38.621166   34695 command_runner.go:130] ! time="2023-07-17 22:04:38.592286312Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0717 22:04:38.621179   34695 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
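The dump above is the plain TOML that `crio config` prints on stdout; minikube inspects it for the handful of settings it cares about (for example cgroup_manager = "cgroupfs" and pause_image). A stdlib-only sketch of reading one such top-level key from the command's output; the key name and line-based parsing are illustrative, and a real consumer would use a TOML parser:

    package main

    import (
    	"bufio"
    	"bytes"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // crioSetting runs `crio config` and returns the value of a top-level key
    // such as cgroup_manager, stripping the surrounding TOML quotes.
    func crioSetting(key string) (string, error) {
    	out, err := exec.Command("crio", "config").Output()
    	if err != nil {
    		return "", err
    	}
    	sc := bufio.NewScanner(bytes.NewReader(out))
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if strings.HasPrefix(line, key) {
    			if parts := strings.SplitN(line, "=", 2); len(parts) == 2 {
    				return strings.Trim(strings.TrimSpace(parts[1]), `"`), nil
    			}
    		}
    	}
    	return "", fmt.Errorf("%s not found in crio config", key)
    }

    func main() {
    	v, err := crioSetting("cgroup_manager")
    	fmt.Println(v, err) // expected "cgroupfs" on this node, per the dump above
    }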
	I0717 22:04:38.621256   34695 cni.go:84] Creating CNI manager for ""
	I0717 22:04:38.621266   34695 cni.go:137] 2 nodes found, recommending kindnet
	I0717 22:04:38.621273   34695 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 22:04:38.621289   34695 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.146 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-009530 NodeName:multinode-009530-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.146 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 22:04:38.621383   34695 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.146
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-009530-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.146
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.222"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 22:04:38.621428   34695 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-009530-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.146
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:multinode-009530 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
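The block above is the configuration minikube renders for the joining m02 node: a kubeadm InitConfiguration/ClusterConfiguration, a KubeletConfiguration, a KubeProxyConfiguration, and the kubelet systemd drop-in, followed by the profile's cluster config struct. Strings such as "0%!"(MISSING) and stat -c "%!s(MISSING) %!y(MISSING)" are logging artifacts, not the values written to disk: the logged text contains a literal %, which Go's fmt package interprets as a formatting verb with no matching argument and renders as %!<verb>(MISSING); the intended values are "0%" and stat -c "%s %y". To confirm what actually lands on the node, something like the following should work (a sketch, assuming minikube ssh --node reaches the worker and the rendered kubelet config carries the same eviction settings; the config.yaml path is the one written during the join at 22:04:41):

    minikube -p multinode-009530 ssh -n m02 "sudo grep -A4 evictionHard /var/lib/kubelet/config.yaml"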
	I0717 22:04:38.621474   34695 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 22:04:38.630443   34695 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.27.3': No such file or directory
	I0717 22:04:38.630486   34695 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.27.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.27.3': No such file or directory
	
	Initiating transfer...
	I0717 22:04:38.630538   34695 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.27.3
	I0717 22:04:38.639297   34695 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubectl.sha256
	I0717 22:04:38.639324   34695 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/linux/amd64/v1.27.3/kubectl -> /var/lib/minikube/binaries/v1.27.3/kubectl
	I0717 22:04:38.639323   34695 download.go:107] Downloading: https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/linux/amd64/v1.27.3/kubelet
	I0717 22:04:38.639403   34695 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.27.3/kubectl
	I0717 22:04:38.639323   34695 download.go:107] Downloading: https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/linux/amd64/v1.27.3/kubeadm
	I0717 22:04:38.643812   34695 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.27.3/kubectl': No such file or directory
	I0717 22:04:38.643905   34695 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.27.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.27.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.27.3/kubectl': No such file or directory
	I0717 22:04:38.643935   34695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/cache/linux/amd64/v1.27.3/kubectl --> /var/lib/minikube/binaries/v1.27.3/kubectl (49258496 bytes)
	I0717 22:04:39.374870   34695 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/linux/amd64/v1.27.3/kubeadm -> /var/lib/minikube/binaries/v1.27.3/kubeadm
	I0717 22:04:39.374949   34695 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.27.3/kubeadm
	I0717 22:04:39.379910   34695 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.27.3/kubeadm': No such file or directory
	I0717 22:04:39.379957   34695 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.27.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.27.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.27.3/kubeadm': No such file or directory
	I0717 22:04:39.379981   34695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/cache/linux/amd64/v1.27.3/kubeadm --> /var/lib/minikube/binaries/v1.27.3/kubeadm (48160768 bytes)
	I0717 22:04:40.066141   34695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:04:40.081115   34695 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/linux/amd64/v1.27.3/kubelet -> /var/lib/minikube/binaries/v1.27.3/kubelet
	I0717 22:04:40.081231   34695 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.27.3/kubelet
	I0717 22:04:40.086095   34695 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.27.3/kubelet': No such file or directory
	I0717 22:04:40.086260   34695 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.27.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.27.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.27.3/kubelet': No such file or directory
	I0717 22:04:40.086299   34695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/cache/linux/amd64/v1.27.3/kubelet --> /var/lib/minikube/binaries/v1.27.3/kubelet (106160128 bytes)
	I0717 22:04:40.587628   34695 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0717 22:04:40.597427   34695 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0717 22:04:40.615159   34695 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
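The joining VM starts without any Kubernetes binaries, so the step above probes /var/lib/minikube/binaries/v1.27.3 with stat, downloads kubectl, kubeadm and kubelet into the host-side cache under .minikube/cache, copies them over SSH, and then writes the kubelet systemd unit plus its 10-kubeadm.conf drop-in. A hand-rolled equivalent of the check-then-transfer loop (a sketch only: the node IP, cache path and docker user come from the log, the per-machine key path for m02 is an assumption, and minikube itself does this through its internal ssh_runner rather than plain ssh/scp):

    VER=v1.27.3
    CACHE=/home/jenkins/minikube-integration/16899-15759/.minikube/cache/linux/amd64/$VER
    KEY=/home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530-m02/id_rsa   # assumed path
    for BIN in kubectl kubeadm kubelet; do
      # skip binaries that are already in place on the node
      ssh -i "$KEY" docker@192.168.39.146 "stat /var/lib/minikube/binaries/$VER/$BIN" >/dev/null 2>&1 \
        || scp -i "$KEY" "$CACHE/$BIN" docker@192.168.39.146:/tmp/$BIN   # then sudo install -m755 into /var/lib/minikube/binaries/$VER/
    done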
	I0717 22:04:40.631303   34695 ssh_runner.go:195] Run: grep 192.168.39.222	control-plane.minikube.internal$ /etc/hosts
	I0717 22:04:40.635125   34695 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.222	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
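The one-liner above keeps /etc/hosts idempotent: it drops any existing control-plane.minikube.internal entry, appends the current control-plane IP, and copies the result back with sudo. Unrolled, the same command reads (printf is used here for the literal tab):

    {
      grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      printf '192.168.39.222\tcontrol-plane.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts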
	I0717 22:04:40.646446   34695 host.go:66] Checking if "multinode-009530" exists ...
	I0717 22:04:40.646701   34695 config.go:182] Loaded profile config "multinode-009530": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:04:40.646891   34695 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:04:40.646945   34695 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:04:40.662339   34695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43445
	I0717 22:04:40.662728   34695 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:04:40.663193   34695 main.go:141] libmachine: Using API Version  1
	I0717 22:04:40.663217   34695 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:04:40.663551   34695 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:04:40.663777   34695 main.go:141] libmachine: (multinode-009530) Calling .DriverName
	I0717 22:04:40.663931   34695 start.go:301] JoinCluster: &{Name:multinode-009530 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-009530 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.222 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.146 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountStr
ing:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:04:40.664016   34695 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0717 22:04:40.664035   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHHostname
	I0717 22:04:40.666883   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:04:40.667352   34695 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:04:40.667391   34695 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:04:40.667529   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHPort
	I0717 22:04:40.667718   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:04:40.667888   34695 main.go:141] libmachine: (multinode-009530) Calling .GetSSHUsername
	I0717 22:04:40.668032   34695 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530/id_rsa Username:docker}
	I0717 22:04:40.829777   34695 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token phd8lh.avl14if53z6qqscd --discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb 
	I0717 22:04:40.836260   34695 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.146 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0717 22:04:40.836302   34695 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token phd8lh.avl14if53z6qqscd --discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-009530-m02"
	I0717 22:04:40.879401   34695 command_runner.go:130] > [preflight] Running pre-flight checks
	I0717 22:04:41.008284   34695 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0717 22:04:41.008319   34695 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0717 22:04:41.048305   34695 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 22:04:41.048348   34695 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 22:04:41.048356   34695 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0717 22:04:41.176077   34695 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0717 22:04:43.694656   34695 command_runner.go:130] > This node has joined the cluster:
	I0717 22:04:43.694686   34695 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0717 22:04:43.694696   34695 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0717 22:04:43.694705   34695 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0717 22:04:43.697040   34695 command_runner.go:130] ! W0717 22:04:40.866041     825 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0717 22:04:43.697071   34695 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 22:04:43.697093   34695 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token phd8lh.avl14if53z6qqscd --discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-009530-m02": (2.860777817s)
	I0717 22:04:43.697114   34695 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0717 22:04:43.984173   34695 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0717 22:04:43.984206   34695 start.go:303] JoinCluster complete in 3.320276304s
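The join itself is a two-step handshake driven from the host: minikube asks the existing control plane for a join command, runs the printed command on the new node with a few extra flags, then enables and starts kubelet. The warning at 22:04:43.697040 is kubeadm noting that --cri-socket was passed without a unix:// scheme and prepending it automatically. Reproduced by hand from the commands in the log (token and CA hash replaced with placeholders; use whatever kubeadm token create prints):

    # on the control-plane node
    sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm token create --print-join-command --ttl=0
    # on the joining node: the printed command plus minikube's extra flags
    sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm join control-plane.minikube.internal:8443 \
      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
      --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-009530-m02
    sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet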
	I0717 22:04:43.984216   34695 cni.go:84] Creating CNI manager for ""
	I0717 22:04:43.984222   34695 cni.go:137] 2 nodes found, recommending kindnet
	I0717 22:04:43.984282   34695 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 22:04:43.990820   34695 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0717 22:04:43.990850   34695 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0717 22:04:43.990860   34695 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0717 22:04:43.990870   34695 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 22:04:43.990879   34695 command_runner.go:130] > Access: 2023-07-17 22:03:20.325572299 +0000
	I0717 22:04:43.990886   34695 command_runner.go:130] > Modify: 2023-07-15 02:34:28.000000000 +0000
	I0717 22:04:43.990893   34695 command_runner.go:130] > Change: 2023-07-17 22:03:18.497572299 +0000
	I0717 22:04:43.990900   34695 command_runner.go:130] >  Birth: -
	I0717 22:04:43.991235   34695 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 22:04:43.991253   34695 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 22:04:44.011361   34695 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 22:04:44.381178   34695 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0717 22:04:44.388670   34695 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0717 22:04:44.392502   34695 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0717 22:04:44.406476   34695 command_runner.go:130] > daemonset.apps/kindnet configured
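With two nodes present, the kindnet CNI manifest is re-applied with the control-plane node's own kubectl and kubeconfig; the apply is idempotent, so the existing RBAC objects report "unchanged" and only the DaemonSet is reconfigured to cover the new node. The command as run in the log:

    sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply \
      --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml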
	I0717 22:04:44.409845   34695 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:04:44.410043   34695 kapi.go:59] client config for multinode-009530: &rest.Config{Host:"https://192.168.39.222:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/client.crt", KeyFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/client.key", CAFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 22:04:44.410302   34695 round_trippers.go:463] GET https://192.168.39.222:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0717 22:04:44.410313   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:44.410320   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:44.410328   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:44.412408   34695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:04:44.412431   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:44.412441   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:44.412449   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:44.412458   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:44.412466   34695 round_trippers.go:580]     Content-Length: 291
	I0717 22:04:44.412476   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:44 GMT
	I0717 22:04:44.412485   34695 round_trippers.go:580]     Audit-Id: 59c05c6e-d61a-4f1f-bc61-18a5f8d1f00b
	I0717 22:04:44.412497   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:44.412530   34695 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c60c6831-559f-4b19-8b15-656b8972a35c","resourceVersion":"450","creationTimestamp":"2023-07-17T22:03:52Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0717 22:04:44.412626   34695 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-009530" context rescaled to 1 replicas
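minikube then pins coredns to a single replica through the deployment's Scale subresource; the GET above already returns spec.replicas: 1, so nothing changes here. A kubectl equivalent of that rescale (a sketch; minikube talks to the subresource directly via client-go):

    kubectl --context multinode-009530 -n kube-system scale deployment coredns --replicas=1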
	I0717 22:04:44.412658   34695 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.146 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0717 22:04:44.568936   34695 out.go:177] * Verifying Kubernetes components...
	I0717 22:04:44.663735   34695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:04:44.680963   34695 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:04:44.681265   34695 kapi.go:59] client config for multinode-009530: &rest.Config{Host:"https://192.168.39.222:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/client.crt", KeyFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/client.key", CAFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 22:04:44.681492   34695 node_ready.go:35] waiting up to 6m0s for node "multinode-009530-m02" to be "Ready" ...
	I0717 22:04:44.681614   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m02
	I0717 22:04:44.681626   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:44.681633   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:44.681639   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:44.691514   34695 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0717 22:04:44.691541   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:44.691551   34695 round_trippers.go:580]     Audit-Id: 429944b9-27d8-405e-b6ec-f5f49a2fe511
	I0717 22:04:44.691561   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:44.691569   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:44.691577   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:44.691587   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:44.691600   34695 round_trippers.go:580]     Content-Length: 3640
	I0717 22:04:44.691610   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:44 GMT
	I0717 22:04:44.691743   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530-m02","uid":"329b572e-b661-4301-b778-f37c0f69b53d","resourceVersion":"503","creationTimestamp":"2023-07-17T22:04:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0717 22:04:45.192795   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m02
	I0717 22:04:45.192815   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:45.192823   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:45.192829   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:45.196389   34695 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:04:45.196413   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:45.196423   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:45.196431   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:45.196440   34695 round_trippers.go:580]     Content-Length: 3640
	I0717 22:04:45.196448   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:45 GMT
	I0717 22:04:45.196462   34695 round_trippers.go:580]     Audit-Id: 5fa6b586-f4e7-44bd-b19c-bb70c614f753
	I0717 22:04:45.196473   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:45.196486   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:45.196575   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530-m02","uid":"329b572e-b661-4301-b778-f37c0f69b53d","resourceVersion":"503","creationTimestamp":"2023-07-17T22:04:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0717 22:04:45.692669   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m02
	I0717 22:04:45.692693   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:45.692701   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:45.692707   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:45.695883   34695 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:04:45.695918   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:45.695928   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:45.695936   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:45.695948   34695 round_trippers.go:580]     Content-Length: 3640
	I0717 22:04:45.695957   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:45 GMT
	I0717 22:04:45.695968   34695 round_trippers.go:580]     Audit-Id: 7e566c56-7771-4bd8-bb06-c1ccc3325b1d
	I0717 22:04:45.695980   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:45.695991   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:45.696083   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530-m02","uid":"329b572e-b661-4301-b778-f37c0f69b53d","resourceVersion":"503","creationTimestamp":"2023-07-17T22:04:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0717 22:04:46.192562   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m02
	I0717 22:04:46.192584   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:46.192592   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:46.192598   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:46.196073   34695 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:04:46.196106   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:46.196118   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:46.196127   34695 round_trippers.go:580]     Content-Length: 3640
	I0717 22:04:46.196136   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:46 GMT
	I0717 22:04:46.196144   34695 round_trippers.go:580]     Audit-Id: dfe93315-2be0-4cab-a6b6-88c4da4c4883
	I0717 22:04:46.196157   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:46.196169   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:46.196181   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:46.196290   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530-m02","uid":"329b572e-b661-4301-b778-f37c0f69b53d","resourceVersion":"503","creationTimestamp":"2023-07-17T22:04:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0717 22:04:46.692816   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m02
	I0717 22:04:46.692840   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:46.692848   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:46.692855   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:46.696837   34695 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:04:46.696866   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:46.696876   34695 round_trippers.go:580]     Content-Length: 3640
	I0717 22:04:46.696885   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:46 GMT
	I0717 22:04:46.696894   34695 round_trippers.go:580]     Audit-Id: 794a335d-0383-4920-96ec-1dad9a1d4d05
	I0717 22:04:46.696902   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:46.696911   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:46.696939   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:46.696948   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:46.697481   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530-m02","uid":"329b572e-b661-4301-b778-f37c0f69b53d","resourceVersion":"503","creationTimestamp":"2023-07-17T22:04:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0717 22:04:46.697851   34695 node_ready.go:58] node "multinode-009530-m02" has status "Ready":"False"
	I0717 22:04:47.193160   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m02
	I0717 22:04:47.193183   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:47.193196   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:47.193207   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:47.196026   34695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:04:47.196050   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:47.196061   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:47.196070   34695 round_trippers.go:580]     Content-Length: 3640
	I0717 22:04:47.196079   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:47 GMT
	I0717 22:04:47.196088   34695 round_trippers.go:580]     Audit-Id: 998712f4-b5fd-4687-b077-fb8e0d2ae18d
	I0717 22:04:47.196104   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:47.196113   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:47.196125   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:47.196219   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530-m02","uid":"329b572e-b661-4301-b778-f37c0f69b53d","resourceVersion":"503","creationTimestamp":"2023-07-17T22:04:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0717 22:04:47.692803   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m02
	I0717 22:04:47.692828   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:47.692835   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:47.692841   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:47.697626   34695 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 22:04:47.697648   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:47.697655   34695 round_trippers.go:580]     Audit-Id: 61751d49-132c-4bb6-a04e-49b5fe3213ad
	I0717 22:04:47.697661   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:47.697666   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:47.697672   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:47.697679   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:47.697688   34695 round_trippers.go:580]     Content-Length: 3640
	I0717 22:04:47.697699   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:47 GMT
	I0717 22:04:47.698174   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530-m02","uid":"329b572e-b661-4301-b778-f37c0f69b53d","resourceVersion":"503","creationTimestamp":"2023-07-17T22:04:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0717 22:04:48.192819   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m02
	I0717 22:04:48.192843   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:48.192851   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:48.192857   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:48.195904   34695 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:04:48.195931   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:48.195939   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:48 GMT
	I0717 22:04:48.195945   34695 round_trippers.go:580]     Audit-Id: 757d4089-053e-45ff-9329-5bafab4b7756
	I0717 22:04:48.195950   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:48.195959   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:48.195971   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:48.195982   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:48.195995   34695 round_trippers.go:580]     Content-Length: 3640
	I0717 22:04:48.196077   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530-m02","uid":"329b572e-b661-4301-b778-f37c0f69b53d","resourceVersion":"503","creationTimestamp":"2023-07-17T22:04:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0717 22:04:48.692571   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m02
	I0717 22:04:48.692595   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:48.692606   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:48.692620   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:48.696250   34695 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:04:48.696276   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:48.696286   34695 round_trippers.go:580]     Audit-Id: 03b9c3ed-bf39-42b5-8d3b-9295b5b1f682
	I0717 22:04:48.696295   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:48.696305   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:48.696313   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:48.696326   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:48.696335   34695 round_trippers.go:580]     Content-Length: 3640
	I0717 22:04:48.696346   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:48 GMT
	I0717 22:04:48.696434   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530-m02","uid":"329b572e-b661-4301-b778-f37c0f69b53d","resourceVersion":"503","creationTimestamp":"2023-07-17T22:04:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0717 22:04:49.192975   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m02
	I0717 22:04:49.193003   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:49.193015   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:49.193024   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:49.196185   34695 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:04:49.196212   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:49.196223   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:49.196233   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:49.196243   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:49.196251   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:49.196259   34695 round_trippers.go:580]     Content-Length: 3640
	I0717 22:04:49.196266   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:49 GMT
	I0717 22:04:49.196281   34695 round_trippers.go:580]     Audit-Id: c8db0f26-7a62-47cf-86a5-c1bda63cc7db
	I0717 22:04:49.196376   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530-m02","uid":"329b572e-b661-4301-b778-f37c0f69b53d","resourceVersion":"503","creationTimestamp":"2023-07-17T22:04:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0717 22:04:49.196676   34695 node_ready.go:58] node "multinode-009530-m02" has status "Ready":"False"
	I0717 22:04:49.692467   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m02
	I0717 22:04:49.692490   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:49.692498   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:49.692504   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:49.695429   34695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:04:49.695447   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:49.695459   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:49 GMT
	I0717 22:04:49.695465   34695 round_trippers.go:580]     Audit-Id: fb61b90b-95cd-4d1e-99f4-c754bc0fd4ea
	I0717 22:04:49.695473   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:49.695482   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:49.695494   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:49.695508   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:49.695521   34695 round_trippers.go:580]     Content-Length: 3640
	I0717 22:04:49.695683   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530-m02","uid":"329b572e-b661-4301-b778-f37c0f69b53d","resourceVersion":"503","creationTimestamp":"2023-07-17T22:04:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0717 22:04:50.192290   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m02
	I0717 22:04:50.192314   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:50.192324   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:50.192330   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:50.195006   34695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:04:50.195034   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:50.195043   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:50.195048   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:50.195055   34695 round_trippers.go:580]     Content-Length: 3640
	I0717 22:04:50.195061   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:50 GMT
	I0717 22:04:50.195067   34695 round_trippers.go:580]     Audit-Id: 40295a46-79c4-4ca7-9594-ac5f96b7f6a0
	I0717 22:04:50.195072   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:50.195077   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:50.195161   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530-m02","uid":"329b572e-b661-4301-b778-f37c0f69b53d","resourceVersion":"503","creationTimestamp":"2023-07-17T22:04:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0717 22:04:50.693097   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m02
	I0717 22:04:50.693132   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:50.693141   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:50.693147   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:50.696846   34695 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:04:50.696859   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:50.696865   34695 round_trippers.go:580]     Content-Length: 3640
	I0717 22:04:50.696873   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:50 GMT
	I0717 22:04:50.696881   34695 round_trippers.go:580]     Audit-Id: f27ddc5c-1cbc-4f07-abb2-39aed89e1d6b
	I0717 22:04:50.696890   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:50.696899   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:50.696908   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:50.696917   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:50.697215   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530-m02","uid":"329b572e-b661-4301-b778-f37c0f69b53d","resourceVersion":"503","creationTimestamp":"2023-07-17T22:04:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I0717 22:04:51.192909   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m02
	I0717 22:04:51.192934   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:51.192945   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:51.192953   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:51.195487   34695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:04:51.195505   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:51.195512   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:51 GMT
	I0717 22:04:51.195518   34695 round_trippers.go:580]     Audit-Id: cc391b62-1124-4920-9e36-9b212fc1fa5c
	I0717 22:04:51.195523   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:51.195528   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:51.195533   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:51.195539   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:51.195544   34695 round_trippers.go:580]     Content-Length: 3726
	I0717 22:04:51.195805   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530-m02","uid":"329b572e-b661-4301-b778-f37c0f69b53d","resourceVersion":"524","creationTimestamp":"2023-07-17T22:04:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2702 chars]
	I0717 22:04:51.196094   34695 node_ready.go:49] node "multinode-009530-m02" has status "Ready":"True"
	I0717 22:04:51.196110   34695 node_ready.go:38] duration metric: took 6.514604244s waiting for node "multinode-009530-m02" to be "Ready" ...
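The readiness wait above simply re-GETs the Node object roughly every 500ms until its Ready condition flips to True, which here takes about 6.5s after the join. The same wait can be expressed with kubectl (a sketch using the context name from the log):

    kubectl --context multinode-009530 wait --for=condition=Ready node/multinode-009530-m02 --timeout=6m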
	I0717 22:04:51.196120   34695 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:04:51.196180   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods
	I0717 22:04:51.196191   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:51.196210   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:51.196222   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:51.199671   34695 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:04:51.199683   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:51.199689   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:51.199695   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:51.199700   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:51.199706   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:51 GMT
	I0717 22:04:51.199711   34695 round_trippers.go:580]     Audit-Id: e0baaca9-3c12-4947-9c79-5bc77b96d935
	I0717 22:04:51.199718   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:51.201766   34695 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"524"},"items":[{"metadata":{"name":"coredns-5d78c9869d-z4fr8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"1fb1d992-a7b6-4259-ba61-dc4092c65c44","resourceVersion":"446","creationTimestamp":"2023-07-17T22:04:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a6163f77-c2a4-4a3f-a656-fb3401fc7602","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6163f77-c2a4-4a3f-a656-fb3401fc7602\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67374 chars]
	I0717 22:04:51.203874   34695 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-z4fr8" in "kube-system" namespace to be "Ready" ...
	I0717 22:04:51.203942   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-z4fr8
	I0717 22:04:51.203953   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:51.203962   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:51.203968   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:51.206092   34695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:04:51.206105   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:51.206114   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:51.206123   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:51.206132   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:51.206141   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:51 GMT
	I0717 22:04:51.206149   34695 round_trippers.go:580]     Audit-Id: 442e67da-e62c-4f11-b08e-cfb6d3239936
	I0717 22:04:51.206155   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:51.206284   34695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-z4fr8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"1fb1d992-a7b6-4259-ba61-dc4092c65c44","resourceVersion":"446","creationTimestamp":"2023-07-17T22:04:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a6163f77-c2a4-4a3f-a656-fb3401fc7602","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6163f77-c2a4-4a3f-a656-fb3401fc7602\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0717 22:04:51.206664   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:04:51.206676   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:51.206683   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:51.206689   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:51.208685   34695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 22:04:51.208696   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:51.208704   34695 round_trippers.go:580]     Audit-Id: a45f8dcc-400c-4b1f-ba04-e02e1c773890
	I0717 22:04:51.208709   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:51.208714   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:51.208719   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:51.208725   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:51.208731   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:51 GMT
	I0717 22:04:51.209028   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"426","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0717 22:04:51.209389   34695 pod_ready.go:92] pod "coredns-5d78c9869d-z4fr8" in "kube-system" namespace has status "Ready":"True"
	I0717 22:04:51.209402   34695 pod_ready.go:81] duration metric: took 5.510083ms waiting for pod "coredns-5d78c9869d-z4fr8" in "kube-system" namespace to be "Ready" ...
	I0717 22:04:51.209415   34695 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:04:51.209471   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-009530
	I0717 22:04:51.209482   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:51.209490   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:51.209500   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:51.211605   34695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:04:51.211618   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:51.211624   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:51.211630   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:51 GMT
	I0717 22:04:51.211635   34695 round_trippers.go:580]     Audit-Id: 1c37666f-e8c3-4848-9f6e-bfe708c57348
	I0717 22:04:51.211640   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:51.211645   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:51.211651   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:51.211907   34695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-009530","namespace":"kube-system","uid":"aed75ad9-0156-4275-8a41-b68d09c15660","resourceVersion":"444","creationTimestamp":"2023-07-17T22:03:52Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.222:2379","kubernetes.io/config.hash":"ab77d0bbc5cf528d40fb1d6635b3acda","kubernetes.io/config.mirror":"ab77d0bbc5cf528d40fb1d6635b3acda","kubernetes.io/config.seen":"2023-07-17T22:03:52.473671520Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:03:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0717 22:04:51.212227   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:04:51.212240   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:51.212252   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:51.212261   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:51.214126   34695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 22:04:51.214139   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:51.214145   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:51.214150   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:51 GMT
	I0717 22:04:51.214155   34695 round_trippers.go:580]     Audit-Id: 5bbb091d-bef8-42c4-b73a-a7456a95d361
	I0717 22:04:51.214160   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:51.214167   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:51.214175   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:51.214465   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"426","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0717 22:04:51.214726   34695 pod_ready.go:92] pod "etcd-multinode-009530" in "kube-system" namespace has status "Ready":"True"
	I0717 22:04:51.214739   34695 pod_ready.go:81] duration metric: took 5.317809ms waiting for pod "etcd-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:04:51.214752   34695 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:04:51.214800   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-009530
	I0717 22:04:51.214808   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:51.214815   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:51.214821   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:51.217089   34695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:04:51.217100   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:51.217106   34695 round_trippers.go:580]     Audit-Id: 0d57c870-ba55-4dcd-b141-32c4387ae58d
	I0717 22:04:51.217113   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:51.217119   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:51.217127   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:51.217136   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:51.217146   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:51 GMT
	I0717 22:04:51.217276   34695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-009530","namespace":"kube-system","uid":"958b1550-f15f-49f3-acf3-dbab69f82fb8","resourceVersion":"442","creationTimestamp":"2023-07-17T22:03:52Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.222:8443","kubernetes.io/config.hash":"49e7615bd1aa66d6e32161e120c48180","kubernetes.io/config.mirror":"49e7615bd1aa66d6e32161e120c48180","kubernetes.io/config.seen":"2023-07-17T22:03:52.473675304Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:03:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0717 22:04:51.217656   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:04:51.217667   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:51.217674   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:51.217680   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:51.219312   34695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 22:04:51.219324   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:51.219330   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:51.219336   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:51.219341   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:51.219347   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:51.219356   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:51 GMT
	I0717 22:04:51.219366   34695 round_trippers.go:580]     Audit-Id: 88ce67c4-d907-4267-977d-bf5c6b922d9d
	I0717 22:04:51.219600   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"426","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0717 22:04:51.219948   34695 pod_ready.go:92] pod "kube-apiserver-multinode-009530" in "kube-system" namespace has status "Ready":"True"
	I0717 22:04:51.219962   34695 pod_ready.go:81] duration metric: took 5.201884ms waiting for pod "kube-apiserver-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:04:51.219973   34695 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:04:51.220024   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-009530
	I0717 22:04:51.220034   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:51.220042   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:51.220050   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:51.222001   34695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 22:04:51.222019   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:51.222026   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:51.222031   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:51.222036   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:51.222042   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:51.222049   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:51 GMT
	I0717 22:04:51.222057   34695 round_trippers.go:580]     Audit-Id: c3857a7a-3882-46e2-85d5-d8f54a37b191
	I0717 22:04:51.222176   34695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-009530","namespace":"kube-system","uid":"1c9dba7c-6497-41f0-b751-17988278c710","resourceVersion":"443","creationTimestamp":"2023-07-17T22:03:52Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d8b61663949a18745a23bcf487c538f2","kubernetes.io/config.mirror":"d8b61663949a18745a23bcf487c538f2","kubernetes.io/config.seen":"2023-07-17T22:03:52.473676600Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:03:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0717 22:04:51.222613   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:04:51.222631   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:51.222642   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:51.222650   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:51.224518   34695 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 22:04:51.224531   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:51.224537   34695 round_trippers.go:580]     Audit-Id: f92f0fab-6693-4281-9fa9-0a325474fd80
	I0717 22:04:51.224542   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:51.224547   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:51.224553   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:51.224558   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:51.224563   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:51 GMT
	I0717 22:04:51.224713   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"426","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0717 22:04:51.225046   34695 pod_ready.go:92] pod "kube-controller-manager-multinode-009530" in "kube-system" namespace has status "Ready":"True"
	I0717 22:04:51.225061   34695 pod_ready.go:81] duration metric: took 5.079108ms waiting for pod "kube-controller-manager-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:04:51.225071   34695 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6rxv8" in "kube-system" namespace to be "Ready" ...
	I0717 22:04:51.393443   34695 request.go:628] Waited for 168.324033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6rxv8
	I0717 22:04:51.393513   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6rxv8
	I0717 22:04:51.393528   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:51.393541   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:51.393550   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:51.396685   34695 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:04:51.396706   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:51.396713   34695 round_trippers.go:580]     Audit-Id: 83a7f93f-d20c-4a02-909a-868016975ed8
	I0717 22:04:51.396719   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:51.396724   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:51.396730   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:51.396739   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:51.396748   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:51 GMT
	I0717 22:04:51.397201   34695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6rxv8","generateName":"kube-proxy-","namespace":"kube-system","uid":"0d197eb7-b5bd-446a-b2f4-c513c06afcbe","resourceVersion":"512","creationTimestamp":"2023-07-17T22:04:43Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bcb9f55e-4db6-4370-ad50-de72169bcc0c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcb9f55e-4db6-4370-ad50-de72169bcc0c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0717 22:04:51.592937   34695 request.go:628] Waited for 195.319314ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m02
	I0717 22:04:51.592999   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m02
	I0717 22:04:51.593008   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:51.593015   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:51.593021   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:51.595916   34695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:04:51.595935   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:51.595941   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:51.595947   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:51.595952   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:51.595957   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:51.595963   34695 round_trippers.go:580]     Content-Length: 3726
	I0717 22:04:51.595968   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:51 GMT
	I0717 22:04:51.595974   34695 round_trippers.go:580]     Audit-Id: c3ec8569-80d0-4711-93db-2abc93c04c69
	I0717 22:04:51.596039   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530-m02","uid":"329b572e-b661-4301-b778-f37c0f69b53d","resourceVersion":"524","creationTimestamp":"2023-07-17T22:04:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2702 chars]
	I0717 22:04:51.596253   34695 pod_ready.go:92] pod "kube-proxy-6rxv8" in "kube-system" namespace has status "Ready":"True"
	I0717 22:04:51.596266   34695 pod_ready.go:81] duration metric: took 371.189114ms waiting for pod "kube-proxy-6rxv8" in "kube-system" namespace to be "Ready" ...
	I0717 22:04:51.596275   34695 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m5spw" in "kube-system" namespace to be "Ready" ...
	I0717 22:04:51.793409   34695 request.go:628] Waited for 197.059009ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m5spw
	I0717 22:04:51.793475   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m5spw
	I0717 22:04:51.793482   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:51.793493   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:51.793504   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:51.796651   34695 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:04:51.796670   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:51.796677   34695 round_trippers.go:580]     Audit-Id: 76d63ca1-6667-40a0-9d8c-6c5ea7bb6ed7
	I0717 22:04:51.796683   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:51.796699   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:51.796704   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:51.796710   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:51.796715   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:51 GMT
	I0717 22:04:51.796942   34695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-m5spw","generateName":"kube-proxy-","namespace":"kube-system","uid":"a4bf0eb3-126a-463e-a670-b4793e1c5ec9","resourceVersion":"415","creationTimestamp":"2023-07-17T22:04:05Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bcb9f55e-4db6-4370-ad50-de72169bcc0c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcb9f55e-4db6-4370-ad50-de72169bcc0c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0717 22:04:51.993742   34695 request.go:628] Waited for 196.397093ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:04:51.993802   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:04:51.993807   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:51.993814   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:51.993825   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:51.996764   34695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:04:51.996787   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:51.996798   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:51.996806   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:51 GMT
	I0717 22:04:51.996815   34695 round_trippers.go:580]     Audit-Id: 056c719f-4eb3-4959-8ca6-88fd8fb235ed
	I0717 22:04:51.996828   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:51.996838   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:51.996851   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:51.997151   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"426","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0717 22:04:51.997504   34695 pod_ready.go:92] pod "kube-proxy-m5spw" in "kube-system" namespace has status "Ready":"True"
	I0717 22:04:51.997541   34695 pod_ready.go:81] duration metric: took 401.258241ms waiting for pod "kube-proxy-m5spw" in "kube-system" namespace to be "Ready" ...
	I0717 22:04:51.997559   34695 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:04:52.192923   34695 request.go:628] Waited for 195.301498ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-009530
	I0717 22:04:52.193000   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-009530
	I0717 22:04:52.193008   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:52.193018   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:52.193026   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:52.195828   34695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:04:52.195853   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:52.195862   34695 round_trippers.go:580]     Audit-Id: 894572e2-6dc6-48bd-ae71-6b82b8a182e9
	I0717 22:04:52.195869   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:52.195876   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:52.195884   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:52.195896   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:52.195908   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:52 GMT
	I0717 22:04:52.196073   34695 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-009530","namespace":"kube-system","uid":"5da85194-923d-40f6-ab44-86209b1f057d","resourceVersion":"441","creationTimestamp":"2023-07-17T22:03:52Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"036d300e0ec7bf28a26e0c644008bbd5","kubernetes.io/config.mirror":"036d300e0ec7bf28a26e0c644008bbd5","kubernetes.io/config.seen":"2023-07-17T22:03:52.473677561Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:03:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0717 22:04:52.393808   34695 request.go:628] Waited for 197.371506ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:04:52.393867   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:04:52.393872   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:52.393879   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:52.393886   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:52.396566   34695 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:04:52.396586   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:52.396596   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:52.396604   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:52 GMT
	I0717 22:04:52.396612   34695 round_trippers.go:580]     Audit-Id: a3f65eac-97a9-49a8-b109-2689d4e35804
	I0717 22:04:52.396625   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:52.396644   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:52.396653   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:52.396986   34695 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"426","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0717 22:04:52.397275   34695 pod_ready.go:92] pod "kube-scheduler-multinode-009530" in "kube-system" namespace has status "Ready":"True"
	I0717 22:04:52.397287   34695 pod_ready.go:81] duration metric: took 399.721271ms waiting for pod "kube-scheduler-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:04:52.397296   34695 pod_ready.go:38] duration metric: took 1.20116699s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:04:52.397308   34695 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 22:04:52.397349   34695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:04:52.411118   34695 system_svc.go:56] duration metric: took 13.803402ms WaitForService to wait for kubelet.
	I0717 22:04:52.411143   34695 kubeadm.go:581] duration metric: took 7.998456807s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 22:04:52.411162   34695 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:04:52.593405   34695 request.go:628] Waited for 182.182487ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes
	I0717 22:04:52.593467   34695 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes
	I0717 22:04:52.593472   34695 round_trippers.go:469] Request Headers:
	I0717 22:04:52.593480   34695 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:04:52.593486   34695 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:04:52.596802   34695 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:04:52.596828   34695 round_trippers.go:577] Response Headers:
	I0717 22:04:52.596839   34695 round_trippers.go:580]     Audit-Id: 8c394ee4-2aa8-4d1e-b459-f269f0d292c5
	I0717 22:04:52.596848   34695 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:04:52.596856   34695 round_trippers.go:580]     Content-Type: application/json
	I0717 22:04:52.596864   34695 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:04:52.596873   34695 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:04:52.596881   34695 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:04:52 GMT
	I0717 22:04:52.597046   34695 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"525"},"items":[{"metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"426","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 9646 chars]
	I0717 22:04:52.597503   34695 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:04:52.597539   34695 node_conditions.go:123] node cpu capacity is 2
	I0717 22:04:52.597551   34695 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:04:52.597559   34695 node_conditions.go:123] node cpu capacity is 2
	I0717 22:04:52.597570   34695 node_conditions.go:105] duration metric: took 186.402464ms to run NodePressure ...
	I0717 22:04:52.597608   34695 start.go:228] waiting for startup goroutines ...
	I0717 22:04:52.597637   34695 start.go:242] writing updated cluster config ...
	I0717 22:04:52.597957   34695 ssh_runner.go:195] Run: rm -f paused
	I0717 22:04:52.645090   34695 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 22:04:52.648361   34695 out.go:177] * Done! kubectl is now configured to use "multinode-009530" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-07-17 22:03:19 UTC, ends at Mon 2023-07-17 22:04:59 UTC. --
	Jul 17 22:04:58 multinode-009530 crio[719]: time="2023-07-17 22:04:58.796389938Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5f18575a-7a8f-46e2-8535-7a5071261086 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:04:58 multinode-009530 crio[719]: time="2023-07-17 22:04:58.796735809Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:41f8f5d2038df12e4acf39cc6cd733e206ee3902e1b6b342ae389ee4316cff59,PodSandboxId:ed4abb148ed09c216e85ffd7edeae10978ef3de85132c38b89b09c5d28d27a2a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1689631495279773983,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-p72ln,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aecc37f7-73f7-490b-9b82-bf330600bf41,},Annotations:map[string]string{io.kubernetes.container.hash: a749282e,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c717490504ec94c54e5abd7d6598d772d57480f996931a70695725db2428f1,PodSandboxId:2895f4b824150d7e35c916117036df5d34a4d974a720798b58f72d2b95930712,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689631452062957357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-z4fr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fb1d992-a7b6-4259-ba61-dc4092c65c44,},Annotations:map[string]string{io.kubernetes.container.hash: e80e6552,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e53f54a64d686de9afc51e0ebe14ec2348aa102a035c15a5927c3aaf11a4519,PodSandboxId:b40bd65df0d8ea42de0491ca2ac766c90c34cbd894deddfc90c2e317de444a2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689631451785926606,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: d8f48e9c-2b37-4edc-89e4-d032cac0d573,},Annotations:map[string]string{io.kubernetes.container.hash: 4c201429,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:182d214321207b9c76d2dc7bc7b359ae82cfcb56b7c95f06b133f9d9092b857c,PodSandboxId:c2ae255224b9575864c9a4f2471bb44b2acbb29b0253544f872da13a6d89d6fb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1689631448914564820,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gh4hn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: d474f5c5-bd94-411b-8d69-b3871c2b5653,},Annotations:map[string]string{io.kubernetes.container.hash: 15eb87d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4386a141e28a3b6942f47c866d899cd642a9e386ed50ec6b7b7009199ba7f62e,PodSandboxId:163a3d994c15b353c92ca622f5332a200d90d12e4409242709d5204a8f5f8e98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689631446987728065,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5spw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4bf0eb3-126a-463e-a670-b4793e1
c5ec9,},Annotations:map[string]string{io.kubernetes.container.hash: 2b8f1632,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6100f3e2988f1145928d5ca3186b81eb1c0498a5be96b2e5dffc6d6107578d1f,PodSandboxId:db73e16434119f8e2d2cc7f247ebcc8209c9200a3a6bc5aefc0b81f10af05b98,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689631425443868867,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 036d300e0ec7bf28a26e0c644008bbd5,},Anno
tations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b656cd44b51979a4dcecb8599727f6135d8c62c85599f3a7c227775adbdc1de4,PodSandboxId:fbd2eafaa90f19070252d84bfdabafc47bc0106424e6af13565361b88eb592f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689631424920562899,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab77d0bbc5cf528d40fb1d6635b3acda,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 6ce2a6aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd2e42c13ba34bf9c483c81567627c96f1e6ae5c4f0cd8e053ba66e0881c2424,PodSandboxId:9cf91611c1dd6ea2ecaed7d539177471ff7ead3372f1d580b290542db5db3b82,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689631424724309259,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8b61663949a18745a23bcf487c538f2,},Annotations:map[string]string{io
.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a725fb53f4f6f9bc5225db4e85a2e8b5e77715c19d0aa83420f986b7e4279c5,PodSandboxId:07577e6347183c54ad7199586791b308884c287e823c95ad1e1917440d25b707,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689631424573527164,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49e7615bd1aa66d6e32161e120c48180,},Annotations:map[string]string{io.kubernetes.
container.hash: 2f313b3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5f18575a-7a8f-46e2-8535-7a5071261086 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:04:58 multinode-009530 crio[719]: time="2023-07-17 22:04:58.831383086Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8eac390f-3b37-43da-8366-7c06795f7672 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:04:58 multinode-009530 crio[719]: time="2023-07-17 22:04:58.831443718Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8eac390f-3b37-43da-8366-7c06795f7672 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:04:58 multinode-009530 crio[719]: time="2023-07-17 22:04:58.831748266Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:41f8f5d2038df12e4acf39cc6cd733e206ee3902e1b6b342ae389ee4316cff59,PodSandboxId:ed4abb148ed09c216e85ffd7edeae10978ef3de85132c38b89b09c5d28d27a2a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1689631495279773983,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-p72ln,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aecc37f7-73f7-490b-9b82-bf330600bf41,},Annotations:map[string]string{io.kubernetes.container.hash: a749282e,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c717490504ec94c54e5abd7d6598d772d57480f996931a70695725db2428f1,PodSandboxId:2895f4b824150d7e35c916117036df5d34a4d974a720798b58f72d2b95930712,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689631452062957357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-z4fr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fb1d992-a7b6-4259-ba61-dc4092c65c44,},Annotations:map[string]string{io.kubernetes.container.hash: e80e6552,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e53f54a64d686de9afc51e0ebe14ec2348aa102a035c15a5927c3aaf11a4519,PodSandboxId:b40bd65df0d8ea42de0491ca2ac766c90c34cbd894deddfc90c2e317de444a2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689631451785926606,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: d8f48e9c-2b37-4edc-89e4-d032cac0d573,},Annotations:map[string]string{io.kubernetes.container.hash: 4c201429,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:182d214321207b9c76d2dc7bc7b359ae82cfcb56b7c95f06b133f9d9092b857c,PodSandboxId:c2ae255224b9575864c9a4f2471bb44b2acbb29b0253544f872da13a6d89d6fb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1689631448914564820,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gh4hn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: d474f5c5-bd94-411b-8d69-b3871c2b5653,},Annotations:map[string]string{io.kubernetes.container.hash: 15eb87d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4386a141e28a3b6942f47c866d899cd642a9e386ed50ec6b7b7009199ba7f62e,PodSandboxId:163a3d994c15b353c92ca622f5332a200d90d12e4409242709d5204a8f5f8e98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689631446987728065,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5spw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4bf0eb3-126a-463e-a670-b4793e1
c5ec9,},Annotations:map[string]string{io.kubernetes.container.hash: 2b8f1632,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6100f3e2988f1145928d5ca3186b81eb1c0498a5be96b2e5dffc6d6107578d1f,PodSandboxId:db73e16434119f8e2d2cc7f247ebcc8209c9200a3a6bc5aefc0b81f10af05b98,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689631425443868867,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 036d300e0ec7bf28a26e0c644008bbd5,},Anno
tations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b656cd44b51979a4dcecb8599727f6135d8c62c85599f3a7c227775adbdc1de4,PodSandboxId:fbd2eafaa90f19070252d84bfdabafc47bc0106424e6af13565361b88eb592f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689631424920562899,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab77d0bbc5cf528d40fb1d6635b3acda,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 6ce2a6aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd2e42c13ba34bf9c483c81567627c96f1e6ae5c4f0cd8e053ba66e0881c2424,PodSandboxId:9cf91611c1dd6ea2ecaed7d539177471ff7ead3372f1d580b290542db5db3b82,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689631424724309259,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8b61663949a18745a23bcf487c538f2,},Annotations:map[string]string{io
.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a725fb53f4f6f9bc5225db4e85a2e8b5e77715c19d0aa83420f986b7e4279c5,PodSandboxId:07577e6347183c54ad7199586791b308884c287e823c95ad1e1917440d25b707,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689631424573527164,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49e7615bd1aa66d6e32161e120c48180,},Annotations:map[string]string{io.kubernetes.
container.hash: 2f313b3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8eac390f-3b37-43da-8366-7c06795f7672 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:04:58 multinode-009530 crio[719]: time="2023-07-17 22:04:58.859405075Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=d12f58ac-5087-4841-8593-72cae0cd9177 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 22:04:58 multinode-009530 crio[719]: time="2023-07-17 22:04:58.859769960Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ed4abb148ed09c216e85ffd7edeae10978ef3de85132c38b89b09c5d28d27a2a,Metadata:&PodSandboxMetadata{Name:busybox-67b7f59bb-p72ln,Uid:aecc37f7-73f7-490b-9b82-bf330600bf41,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689631493810132696,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-67b7f59bb-p72ln,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aecc37f7-73f7-490b-9b82-bf330600bf41,pod-template-hash: 67b7f59bb,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T22:04:53.473698148Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2895f4b824150d7e35c916117036df5d34a4d974a720798b58f72d2b95930712,Metadata:&PodSandboxMetadata{Name:coredns-5d78c9869d-z4fr8,Uid:1fb1d992-a7b6-4259-ba61-dc4092c65c44,Namespace:kube-system,Attempt:0,},
State:SANDBOX_READY,CreatedAt:1689631451355702525,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5d78c9869d-z4fr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fb1d992-a7b6-4259-ba61-dc4092c65c44,k8s-app: kube-dns,pod-template-hash: 5d78c9869d,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T22:04:11.004739938Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b40bd65df0d8ea42de0491ca2ac766c90c34cbd894deddfc90c2e317de444a2f,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:d8f48e9c-2b37-4edc-89e4-d032cac0d573,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689631451350656315,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f48e9c-2b37-4edc-89e4-d032cac0d573,},Annotations:map[string]strin
g{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-07-17T22:04:11.013911387Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:163a3d994c15b353c92ca622f5332a200d90d12e4409242709d5204a8f5f8e98,Metadata:&PodSandboxMetadata{Name:kube-proxy-m5spw,Uid:a4bf0eb3-126a-463e-a670-b4793e1c5ec9,Namespace:kube-system,Attem
pt:0,},State:SANDBOX_READY,CreatedAt:1689631446608749520,Labels:map[string]string{controller-revision-hash: 56999f657b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-m5spw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4bf0eb3-126a-463e-a670-b4793e1c5ec9,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T22:04:05.378895108Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c2ae255224b9575864c9a4f2471bb44b2acbb29b0253544f872da13a6d89d6fb,Metadata:&PodSandboxMetadata{Name:kindnet-gh4hn,Uid:d474f5c5-bd94-411b-8d69-b3871c2b5653,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689631445821001565,Labels:map[string]string{app: kindnet,controller-revision-hash: 575d9d6996,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-gh4hn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d474f5c5-bd94-411b-8d69-b3871c2b5653,k8s-app: kindnet,pod-template-generati
on: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T22:04:05.480922919Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:07577e6347183c54ad7199586791b308884c287e823c95ad1e1917440d25b707,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-009530,Uid:49e7615bd1aa66d6e32161e120c48180,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689631424144897434,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49e7615bd1aa66d6e32161e120c48180,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.222:8443,kubernetes.io/config.hash: 49e7615bd1aa66d6e32161e120c48180,kubernetes.io/config.seen: 2023-07-17T22:03:43.550004437Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fbd2eafaa90f19070252d84bfdabafc47bc
0106424e6af13565361b88eb592f5,Metadata:&PodSandboxMetadata{Name:etcd-multinode-009530,Uid:ab77d0bbc5cf528d40fb1d6635b3acda,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689631424138732202,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab77d0bbc5cf528d40fb1d6635b3acda,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.222:2379,kubernetes.io/config.hash: ab77d0bbc5cf528d40fb1d6635b3acda,kubernetes.io/config.seen: 2023-07-17T22:03:43.550000328Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:db73e16434119f8e2d2cc7f247ebcc8209c9200a3a6bc5aefc0b81f10af05b98,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-009530,Uid:036d300e0ec7bf28a26e0c644008bbd5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689631424127565017,Labels:map[string]string{compo
nent: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 036d300e0ec7bf28a26e0c644008bbd5,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 036d300e0ec7bf28a26e0c644008bbd5,kubernetes.io/config.seen: 2023-07-17T22:03:43.550006269Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9cf91611c1dd6ea2ecaed7d539177471ff7ead3372f1d580b290542db5db3b82,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-009530,Uid:d8b61663949a18745a23bcf487c538f2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689631424100301210,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8b61663949a18745a23bcf487c538f2,tier: control-plane,},Annotations:map[string]string{kubernet
es.io/config.hash: d8b61663949a18745a23bcf487c538f2,kubernetes.io/config.seen: 2023-07-17T22:03:43.550005565Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=d12f58ac-5087-4841-8593-72cae0cd9177 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 22:04:58 multinode-009530 crio[719]: time="2023-07-17 22:04:58.860491435Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=39f0cbcf-87e9-4b63-9769-ade25657eb5f name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 22:04:58 multinode-009530 crio[719]: time="2023-07-17 22:04:58.860544912Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=39f0cbcf-87e9-4b63-9769-ade25657eb5f name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 22:04:58 multinode-009530 crio[719]: time="2023-07-17 22:04:58.860874363Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:41f8f5d2038df12e4acf39cc6cd733e206ee3902e1b6b342ae389ee4316cff59,PodSandboxId:ed4abb148ed09c216e85ffd7edeae10978ef3de85132c38b89b09c5d28d27a2a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1689631495279773983,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-p72ln,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aecc37f7-73f7-490b-9b82-bf330600bf41,},Annotations:map[string]string{io.kubernetes.container.hash: a749282e,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c717490504ec94c54e5abd7d6598d772d57480f996931a70695725db2428f1,PodSandboxId:2895f4b824150d7e35c916117036df5d34a4d974a720798b58f72d2b95930712,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689631452062957357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-z4fr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fb1d992-a7b6-4259-ba61-dc4092c65c44,},Annotations:map[string]string{io.kubernetes.container.hash: e80e6552,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e53f54a64d686de9afc51e0ebe14ec2348aa102a035c15a5927c3aaf11a4519,PodSandboxId:b40bd65df0d8ea42de0491ca2ac766c90c34cbd894deddfc90c2e317de444a2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689631451785926606,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: d8f48e9c-2b37-4edc-89e4-d032cac0d573,},Annotations:map[string]string{io.kubernetes.container.hash: 4c201429,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:182d214321207b9c76d2dc7bc7b359ae82cfcb56b7c95f06b133f9d9092b857c,PodSandboxId:c2ae255224b9575864c9a4f2471bb44b2acbb29b0253544f872da13a6d89d6fb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1689631448914564820,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gh4hn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: d474f5c5-bd94-411b-8d69-b3871c2b5653,},Annotations:map[string]string{io.kubernetes.container.hash: 15eb87d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4386a141e28a3b6942f47c866d899cd642a9e386ed50ec6b7b7009199ba7f62e,PodSandboxId:163a3d994c15b353c92ca622f5332a200d90d12e4409242709d5204a8f5f8e98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689631446987728065,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5spw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4bf0eb3-126a-463e-a670-b4793e1
c5ec9,},Annotations:map[string]string{io.kubernetes.container.hash: 2b8f1632,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6100f3e2988f1145928d5ca3186b81eb1c0498a5be96b2e5dffc6d6107578d1f,PodSandboxId:db73e16434119f8e2d2cc7f247ebcc8209c9200a3a6bc5aefc0b81f10af05b98,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689631425443868867,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 036d300e0ec7bf28a26e0c644008bbd5,},Anno
tations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b656cd44b51979a4dcecb8599727f6135d8c62c85599f3a7c227775adbdc1de4,PodSandboxId:fbd2eafaa90f19070252d84bfdabafc47bc0106424e6af13565361b88eb592f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689631424920562899,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab77d0bbc5cf528d40fb1d6635b3acda,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 6ce2a6aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd2e42c13ba34bf9c483c81567627c96f1e6ae5c4f0cd8e053ba66e0881c2424,PodSandboxId:9cf91611c1dd6ea2ecaed7d539177471ff7ead3372f1d580b290542db5db3b82,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689631424724309259,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8b61663949a18745a23bcf487c538f2,},Annotations:map[string]string{io
.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a725fb53f4f6f9bc5225db4e85a2e8b5e77715c19d0aa83420f986b7e4279c5,PodSandboxId:07577e6347183c54ad7199586791b308884c287e823c95ad1e1917440d25b707,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689631424573527164,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49e7615bd1aa66d6e32161e120c48180,},Annotations:map[string]string{io.kubernetes.
container.hash: 2f313b3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=39f0cbcf-87e9-4b63-9769-ade25657eb5f name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 22:04:58 multinode-009530 crio[719]: time="2023-07-17 22:04:58.870678707Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a3b73be4-43cf-4159-85f6-dedddd9332e5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:04:58 multinode-009530 crio[719]: time="2023-07-17 22:04:58.870745051Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a3b73be4-43cf-4159-85f6-dedddd9332e5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:04:58 multinode-009530 crio[719]: time="2023-07-17 22:04:58.870993638Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:41f8f5d2038df12e4acf39cc6cd733e206ee3902e1b6b342ae389ee4316cff59,PodSandboxId:ed4abb148ed09c216e85ffd7edeae10978ef3de85132c38b89b09c5d28d27a2a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1689631495279773983,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-p72ln,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aecc37f7-73f7-490b-9b82-bf330600bf41,},Annotations:map[string]string{io.kubernetes.container.hash: a749282e,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c717490504ec94c54e5abd7d6598d772d57480f996931a70695725db2428f1,PodSandboxId:2895f4b824150d7e35c916117036df5d34a4d974a720798b58f72d2b95930712,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689631452062957357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-z4fr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fb1d992-a7b6-4259-ba61-dc4092c65c44,},Annotations:map[string]string{io.kubernetes.container.hash: e80e6552,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e53f54a64d686de9afc51e0ebe14ec2348aa102a035c15a5927c3aaf11a4519,PodSandboxId:b40bd65df0d8ea42de0491ca2ac766c90c34cbd894deddfc90c2e317de444a2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689631451785926606,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: d8f48e9c-2b37-4edc-89e4-d032cac0d573,},Annotations:map[string]string{io.kubernetes.container.hash: 4c201429,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:182d214321207b9c76d2dc7bc7b359ae82cfcb56b7c95f06b133f9d9092b857c,PodSandboxId:c2ae255224b9575864c9a4f2471bb44b2acbb29b0253544f872da13a6d89d6fb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1689631448914564820,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gh4hn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: d474f5c5-bd94-411b-8d69-b3871c2b5653,},Annotations:map[string]string{io.kubernetes.container.hash: 15eb87d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4386a141e28a3b6942f47c866d899cd642a9e386ed50ec6b7b7009199ba7f62e,PodSandboxId:163a3d994c15b353c92ca622f5332a200d90d12e4409242709d5204a8f5f8e98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689631446987728065,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5spw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4bf0eb3-126a-463e-a670-b4793e1
c5ec9,},Annotations:map[string]string{io.kubernetes.container.hash: 2b8f1632,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6100f3e2988f1145928d5ca3186b81eb1c0498a5be96b2e5dffc6d6107578d1f,PodSandboxId:db73e16434119f8e2d2cc7f247ebcc8209c9200a3a6bc5aefc0b81f10af05b98,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689631425443868867,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 036d300e0ec7bf28a26e0c644008bbd5,},Anno
tations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b656cd44b51979a4dcecb8599727f6135d8c62c85599f3a7c227775adbdc1de4,PodSandboxId:fbd2eafaa90f19070252d84bfdabafc47bc0106424e6af13565361b88eb592f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689631424920562899,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab77d0bbc5cf528d40fb1d6635b3acda,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 6ce2a6aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd2e42c13ba34bf9c483c81567627c96f1e6ae5c4f0cd8e053ba66e0881c2424,PodSandboxId:9cf91611c1dd6ea2ecaed7d539177471ff7ead3372f1d580b290542db5db3b82,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689631424724309259,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8b61663949a18745a23bcf487c538f2,},Annotations:map[string]string{io
.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a725fb53f4f6f9bc5225db4e85a2e8b5e77715c19d0aa83420f986b7e4279c5,PodSandboxId:07577e6347183c54ad7199586791b308884c287e823c95ad1e1917440d25b707,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689631424573527164,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49e7615bd1aa66d6e32161e120c48180,},Annotations:map[string]string{io.kubernetes.
container.hash: 2f313b3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a3b73be4-43cf-4159-85f6-dedddd9332e5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:04:58 multinode-009530 crio[719]: time="2023-07-17 22:04:58.909709050Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bb7b4b45-b340-48f2-a1aa-15b9bcc5d848 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:04:58 multinode-009530 crio[719]: time="2023-07-17 22:04:58.909778291Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bb7b4b45-b340-48f2-a1aa-15b9bcc5d848 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:04:58 multinode-009530 crio[719]: time="2023-07-17 22:04:58.909984890Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:41f8f5d2038df12e4acf39cc6cd733e206ee3902e1b6b342ae389ee4316cff59,PodSandboxId:ed4abb148ed09c216e85ffd7edeae10978ef3de85132c38b89b09c5d28d27a2a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1689631495279773983,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-p72ln,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aecc37f7-73f7-490b-9b82-bf330600bf41,},Annotations:map[string]string{io.kubernetes.container.hash: a749282e,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c717490504ec94c54e5abd7d6598d772d57480f996931a70695725db2428f1,PodSandboxId:2895f4b824150d7e35c916117036df5d34a4d974a720798b58f72d2b95930712,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689631452062957357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-z4fr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fb1d992-a7b6-4259-ba61-dc4092c65c44,},Annotations:map[string]string{io.kubernetes.container.hash: e80e6552,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e53f54a64d686de9afc51e0ebe14ec2348aa102a035c15a5927c3aaf11a4519,PodSandboxId:b40bd65df0d8ea42de0491ca2ac766c90c34cbd894deddfc90c2e317de444a2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689631451785926606,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: d8f48e9c-2b37-4edc-89e4-d032cac0d573,},Annotations:map[string]string{io.kubernetes.container.hash: 4c201429,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:182d214321207b9c76d2dc7bc7b359ae82cfcb56b7c95f06b133f9d9092b857c,PodSandboxId:c2ae255224b9575864c9a4f2471bb44b2acbb29b0253544f872da13a6d89d6fb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1689631448914564820,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gh4hn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: d474f5c5-bd94-411b-8d69-b3871c2b5653,},Annotations:map[string]string{io.kubernetes.container.hash: 15eb87d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4386a141e28a3b6942f47c866d899cd642a9e386ed50ec6b7b7009199ba7f62e,PodSandboxId:163a3d994c15b353c92ca622f5332a200d90d12e4409242709d5204a8f5f8e98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689631446987728065,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5spw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4bf0eb3-126a-463e-a670-b4793e1
c5ec9,},Annotations:map[string]string{io.kubernetes.container.hash: 2b8f1632,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6100f3e2988f1145928d5ca3186b81eb1c0498a5be96b2e5dffc6d6107578d1f,PodSandboxId:db73e16434119f8e2d2cc7f247ebcc8209c9200a3a6bc5aefc0b81f10af05b98,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689631425443868867,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 036d300e0ec7bf28a26e0c644008bbd5,},Anno
tations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b656cd44b51979a4dcecb8599727f6135d8c62c85599f3a7c227775adbdc1de4,PodSandboxId:fbd2eafaa90f19070252d84bfdabafc47bc0106424e6af13565361b88eb592f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689631424920562899,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab77d0bbc5cf528d40fb1d6635b3acda,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 6ce2a6aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd2e42c13ba34bf9c483c81567627c96f1e6ae5c4f0cd8e053ba66e0881c2424,PodSandboxId:9cf91611c1dd6ea2ecaed7d539177471ff7ead3372f1d580b290542db5db3b82,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689631424724309259,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8b61663949a18745a23bcf487c538f2,},Annotations:map[string]string{io
.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a725fb53f4f6f9bc5225db4e85a2e8b5e77715c19d0aa83420f986b7e4279c5,PodSandboxId:07577e6347183c54ad7199586791b308884c287e823c95ad1e1917440d25b707,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689631424573527164,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49e7615bd1aa66d6e32161e120c48180,},Annotations:map[string]string{io.kubernetes.
container.hash: 2f313b3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bb7b4b45-b340-48f2-a1aa-15b9bcc5d848 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:04:58 multinode-009530 crio[719]: time="2023-07-17 22:04:58.945219733Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6274e6ed-7ca1-4f96-a173-dda5c6d40824 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:04:58 multinode-009530 crio[719]: time="2023-07-17 22:04:58.945312013Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6274e6ed-7ca1-4f96-a173-dda5c6d40824 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:04:58 multinode-009530 crio[719]: time="2023-07-17 22:04:58.945549299Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:41f8f5d2038df12e4acf39cc6cd733e206ee3902e1b6b342ae389ee4316cff59,PodSandboxId:ed4abb148ed09c216e85ffd7edeae10978ef3de85132c38b89b09c5d28d27a2a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1689631495279773983,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-p72ln,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aecc37f7-73f7-490b-9b82-bf330600bf41,},Annotations:map[string]string{io.kubernetes.container.hash: a749282e,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c717490504ec94c54e5abd7d6598d772d57480f996931a70695725db2428f1,PodSandboxId:2895f4b824150d7e35c916117036df5d34a4d974a720798b58f72d2b95930712,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689631452062957357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-z4fr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fb1d992-a7b6-4259-ba61-dc4092c65c44,},Annotations:map[string]string{io.kubernetes.container.hash: e80e6552,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e53f54a64d686de9afc51e0ebe14ec2348aa102a035c15a5927c3aaf11a4519,PodSandboxId:b40bd65df0d8ea42de0491ca2ac766c90c34cbd894deddfc90c2e317de444a2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689631451785926606,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: d8f48e9c-2b37-4edc-89e4-d032cac0d573,},Annotations:map[string]string{io.kubernetes.container.hash: 4c201429,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:182d214321207b9c76d2dc7bc7b359ae82cfcb56b7c95f06b133f9d9092b857c,PodSandboxId:c2ae255224b9575864c9a4f2471bb44b2acbb29b0253544f872da13a6d89d6fb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1689631448914564820,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gh4hn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: d474f5c5-bd94-411b-8d69-b3871c2b5653,},Annotations:map[string]string{io.kubernetes.container.hash: 15eb87d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4386a141e28a3b6942f47c866d899cd642a9e386ed50ec6b7b7009199ba7f62e,PodSandboxId:163a3d994c15b353c92ca622f5332a200d90d12e4409242709d5204a8f5f8e98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689631446987728065,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5spw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4bf0eb3-126a-463e-a670-b4793e1
c5ec9,},Annotations:map[string]string{io.kubernetes.container.hash: 2b8f1632,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6100f3e2988f1145928d5ca3186b81eb1c0498a5be96b2e5dffc6d6107578d1f,PodSandboxId:db73e16434119f8e2d2cc7f247ebcc8209c9200a3a6bc5aefc0b81f10af05b98,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689631425443868867,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 036d300e0ec7bf28a26e0c644008bbd5,},Anno
tations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b656cd44b51979a4dcecb8599727f6135d8c62c85599f3a7c227775adbdc1de4,PodSandboxId:fbd2eafaa90f19070252d84bfdabafc47bc0106424e6af13565361b88eb592f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689631424920562899,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab77d0bbc5cf528d40fb1d6635b3acda,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 6ce2a6aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd2e42c13ba34bf9c483c81567627c96f1e6ae5c4f0cd8e053ba66e0881c2424,PodSandboxId:9cf91611c1dd6ea2ecaed7d539177471ff7ead3372f1d580b290542db5db3b82,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689631424724309259,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8b61663949a18745a23bcf487c538f2,},Annotations:map[string]string{io
.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a725fb53f4f6f9bc5225db4e85a2e8b5e77715c19d0aa83420f986b7e4279c5,PodSandboxId:07577e6347183c54ad7199586791b308884c287e823c95ad1e1917440d25b707,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689631424573527164,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49e7615bd1aa66d6e32161e120c48180,},Annotations:map[string]string{io.kubernetes.
container.hash: 2f313b3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6274e6ed-7ca1-4f96-a173-dda5c6d40824 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:04:58 multinode-009530 crio[719]: time="2023-07-17 22:04:58.978258990Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=943b0ca5-4781-4550-9246-fe19dc3db8e3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:04:58 multinode-009530 crio[719]: time="2023-07-17 22:04:58.978323235Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=943b0ca5-4781-4550-9246-fe19dc3db8e3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:04:58 multinode-009530 crio[719]: time="2023-07-17 22:04:58.978549958Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:41f8f5d2038df12e4acf39cc6cd733e206ee3902e1b6b342ae389ee4316cff59,PodSandboxId:ed4abb148ed09c216e85ffd7edeae10978ef3de85132c38b89b09c5d28d27a2a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1689631495279773983,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-p72ln,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aecc37f7-73f7-490b-9b82-bf330600bf41,},Annotations:map[string]string{io.kubernetes.container.hash: a749282e,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c717490504ec94c54e5abd7d6598d772d57480f996931a70695725db2428f1,PodSandboxId:2895f4b824150d7e35c916117036df5d34a4d974a720798b58f72d2b95930712,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689631452062957357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-z4fr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fb1d992-a7b6-4259-ba61-dc4092c65c44,},Annotations:map[string]string{io.kubernetes.container.hash: e80e6552,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e53f54a64d686de9afc51e0ebe14ec2348aa102a035c15a5927c3aaf11a4519,PodSandboxId:b40bd65df0d8ea42de0491ca2ac766c90c34cbd894deddfc90c2e317de444a2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689631451785926606,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: d8f48e9c-2b37-4edc-89e4-d032cac0d573,},Annotations:map[string]string{io.kubernetes.container.hash: 4c201429,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:182d214321207b9c76d2dc7bc7b359ae82cfcb56b7c95f06b133f9d9092b857c,PodSandboxId:c2ae255224b9575864c9a4f2471bb44b2acbb29b0253544f872da13a6d89d6fb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1689631448914564820,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gh4hn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: d474f5c5-bd94-411b-8d69-b3871c2b5653,},Annotations:map[string]string{io.kubernetes.container.hash: 15eb87d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4386a141e28a3b6942f47c866d899cd642a9e386ed50ec6b7b7009199ba7f62e,PodSandboxId:163a3d994c15b353c92ca622f5332a200d90d12e4409242709d5204a8f5f8e98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689631446987728065,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5spw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4bf0eb3-126a-463e-a670-b4793e1
c5ec9,},Annotations:map[string]string{io.kubernetes.container.hash: 2b8f1632,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6100f3e2988f1145928d5ca3186b81eb1c0498a5be96b2e5dffc6d6107578d1f,PodSandboxId:db73e16434119f8e2d2cc7f247ebcc8209c9200a3a6bc5aefc0b81f10af05b98,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689631425443868867,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 036d300e0ec7bf28a26e0c644008bbd5,},Anno
tations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b656cd44b51979a4dcecb8599727f6135d8c62c85599f3a7c227775adbdc1de4,PodSandboxId:fbd2eafaa90f19070252d84bfdabafc47bc0106424e6af13565361b88eb592f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689631424920562899,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab77d0bbc5cf528d40fb1d6635b3acda,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 6ce2a6aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd2e42c13ba34bf9c483c81567627c96f1e6ae5c4f0cd8e053ba66e0881c2424,PodSandboxId:9cf91611c1dd6ea2ecaed7d539177471ff7ead3372f1d580b290542db5db3b82,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689631424724309259,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8b61663949a18745a23bcf487c538f2,},Annotations:map[string]string{io
.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a725fb53f4f6f9bc5225db4e85a2e8b5e77715c19d0aa83420f986b7e4279c5,PodSandboxId:07577e6347183c54ad7199586791b308884c287e823c95ad1e1917440d25b707,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689631424573527164,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49e7615bd1aa66d6e32161e120c48180,},Annotations:map[string]string{io.kubernetes.
container.hash: 2f313b3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=943b0ca5-4781-4550-9246-fe19dc3db8e3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:04:59 multinode-009530 crio[719]: time="2023-07-17 22:04:59.012218408Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=061b3c54-bb98-4263-ada9-512c15393843 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:04:59 multinode-009530 crio[719]: time="2023-07-17 22:04:59.012339415Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=061b3c54-bb98-4263-ada9-512c15393843 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:04:59 multinode-009530 crio[719]: time="2023-07-17 22:04:59.012544879Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:41f8f5d2038df12e4acf39cc6cd733e206ee3902e1b6b342ae389ee4316cff59,PodSandboxId:ed4abb148ed09c216e85ffd7edeae10978ef3de85132c38b89b09c5d28d27a2a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1689631495279773983,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-p72ln,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aecc37f7-73f7-490b-9b82-bf330600bf41,},Annotations:map[string]string{io.kubernetes.container.hash: a749282e,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c717490504ec94c54e5abd7d6598d772d57480f996931a70695725db2428f1,PodSandboxId:2895f4b824150d7e35c916117036df5d34a4d974a720798b58f72d2b95930712,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689631452062957357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-z4fr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fb1d992-a7b6-4259-ba61-dc4092c65c44,},Annotations:map[string]string{io.kubernetes.container.hash: e80e6552,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e53f54a64d686de9afc51e0ebe14ec2348aa102a035c15a5927c3aaf11a4519,PodSandboxId:b40bd65df0d8ea42de0491ca2ac766c90c34cbd894deddfc90c2e317de444a2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689631451785926606,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: d8f48e9c-2b37-4edc-89e4-d032cac0d573,},Annotations:map[string]string{io.kubernetes.container.hash: 4c201429,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:182d214321207b9c76d2dc7bc7b359ae82cfcb56b7c95f06b133f9d9092b857c,PodSandboxId:c2ae255224b9575864c9a4f2471bb44b2acbb29b0253544f872da13a6d89d6fb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1689631448914564820,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gh4hn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: d474f5c5-bd94-411b-8d69-b3871c2b5653,},Annotations:map[string]string{io.kubernetes.container.hash: 15eb87d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4386a141e28a3b6942f47c866d899cd642a9e386ed50ec6b7b7009199ba7f62e,PodSandboxId:163a3d994c15b353c92ca622f5332a200d90d12e4409242709d5204a8f5f8e98,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689631446987728065,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5spw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4bf0eb3-126a-463e-a670-b4793e1
c5ec9,},Annotations:map[string]string{io.kubernetes.container.hash: 2b8f1632,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6100f3e2988f1145928d5ca3186b81eb1c0498a5be96b2e5dffc6d6107578d1f,PodSandboxId:db73e16434119f8e2d2cc7f247ebcc8209c9200a3a6bc5aefc0b81f10af05b98,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689631425443868867,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 036d300e0ec7bf28a26e0c644008bbd5,},Anno
tations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b656cd44b51979a4dcecb8599727f6135d8c62c85599f3a7c227775adbdc1de4,PodSandboxId:fbd2eafaa90f19070252d84bfdabafc47bc0106424e6af13565361b88eb592f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689631424920562899,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab77d0bbc5cf528d40fb1d6635b3acda,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 6ce2a6aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd2e42c13ba34bf9c483c81567627c96f1e6ae5c4f0cd8e053ba66e0881c2424,PodSandboxId:9cf91611c1dd6ea2ecaed7d539177471ff7ead3372f1d580b290542db5db3b82,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689631424724309259,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8b61663949a18745a23bcf487c538f2,},Annotations:map[string]string{io
.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a725fb53f4f6f9bc5225db4e85a2e8b5e77715c19d0aa83420f986b7e4279c5,PodSandboxId:07577e6347183c54ad7199586791b308884c287e823c95ad1e1917440d25b707,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689631424573527164,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49e7615bd1aa66d6e32161e120c48180,},Annotations:map[string]string{io.kubernetes.
container.hash: 2f313b3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=061b3c54-bb98-4263-ada9-512c15393843 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	41f8f5d2038df       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 seconds ago        Running             busybox                   0                   ed4abb148ed09
	38c717490504e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      47 seconds ago       Running             coredns                   0                   2895f4b824150
	2e53f54a64d68       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      47 seconds ago       Running             storage-provisioner       0                   b40bd65df0d8e
	182d214321207       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da                                      50 seconds ago       Running             kindnet-cni               0                   c2ae255224b95
	4386a141e28a3       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c                                      52 seconds ago       Running             kube-proxy                0                   163a3d994c15b
	6100f3e2988f1       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a                                      About a minute ago   Running             kube-scheduler            0                   db73e16434119
	b656cd44b5197       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681                                      About a minute ago   Running             etcd                      0                   fbd2eafaa90f1
	bd2e42c13ba34       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f                                      About a minute ago   Running             kube-controller-manager   0                   9cf91611c1dd6
	2a725fb53f4f6       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a                                      About a minute ago   Running             kube-apiserver            0                   07577e6347183
	
	* 
	* ==> coredns [38c717490504ec94c54e5abd7d6598d772d57480f996931a70695725db2428f1] <==
	* [INFO] 10.244.1.2:51217 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126029s
	[INFO] 10.244.0.3:37273 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154807s
	[INFO] 10.244.0.3:52860 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002103587s
	[INFO] 10.244.0.3:41837 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000107412s
	[INFO] 10.244.0.3:43368 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000070609s
	[INFO] 10.244.0.3:44009 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00146874s
	[INFO] 10.244.0.3:37827 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158233s
	[INFO] 10.244.0.3:32917 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010319s
	[INFO] 10.244.0.3:36855 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096304s
	[INFO] 10.244.1.2:45139 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015411s
	[INFO] 10.244.1.2:32920 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00016946s
	[INFO] 10.244.1.2:35592 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114462s
	[INFO] 10.244.1.2:52016 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110876s
	[INFO] 10.244.0.3:57331 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107345s
	[INFO] 10.244.0.3:55974 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097209s
	[INFO] 10.244.0.3:53398 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075142s
	[INFO] 10.244.0.3:36178 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000156755s
	[INFO] 10.244.1.2:41671 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000173821s
	[INFO] 10.244.1.2:56465 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000244826s
	[INFO] 10.244.1.2:51305 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000274539s
	[INFO] 10.244.1.2:54042 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000218975s
	[INFO] 10.244.0.3:44338 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132164s
	[INFO] 10.244.0.3:37251 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000130167s
	[INFO] 10.244.0.3:58492 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000077402s
	[INFO] 10.244.0.3:54843 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000074741s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-009530
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-009530
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8
	                    minikube.k8s.io/name=multinode-009530
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T22_03_53_0700
	                    minikube.k8s.io/version=v1.31.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 22:03:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-009530
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 22:04:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 22:04:10 +0000   Mon, 17 Jul 2023 22:03:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 22:04:10 +0000   Mon, 17 Jul 2023 22:03:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 22:04:10 +0000   Mon, 17 Jul 2023 22:03:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 22:04:10 +0000   Mon, 17 Jul 2023 22:04:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.222
	  Hostname:    multinode-009530
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a35bd1412ac04609a53c53355ebc2b8a
	  System UUID:                a35bd141-2ac0-4609-a53c-53355ebc2b8a
	  Boot ID:                    103fd4e9-ffb3-4107-864d-0c773127a188
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-p72ln                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 coredns-5d78c9869d-z4fr8                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     54s
	  kube-system                 etcd-multinode-009530                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         67s
	  kube-system                 kindnet-gh4hn                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      54s
	  kube-system                 kube-apiserver-multinode-009530             250m (12%)    0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-controller-manager-multinode-009530    200m (10%)    0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-proxy-m5spw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-scheduler-multinode-009530             100m (5%)     0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 51s                kube-proxy       
	  Normal  Starting                 76s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  76s (x8 over 76s)  kubelet          Node multinode-009530 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    76s (x8 over 76s)  kubelet          Node multinode-009530 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     76s (x7 over 76s)  kubelet          Node multinode-009530 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  76s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 67s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  67s                kubelet          Node multinode-009530 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    67s                kubelet          Node multinode-009530 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     67s                kubelet          Node multinode-009530 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  67s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           55s                node-controller  Node multinode-009530 event: Registered Node multinode-009530 in Controller
	  Normal  NodeReady                49s                kubelet          Node multinode-009530 status is now: NodeReady
	
	
	Name:               multinode-009530-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-009530-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 22:04:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-009530-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 22:04:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 22:04:51 +0000   Mon, 17 Jul 2023 22:04:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 22:04:51 +0000   Mon, 17 Jul 2023 22:04:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 22:04:51 +0000   Mon, 17 Jul 2023 22:04:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 22:04:51 +0000   Mon, 17 Jul 2023 22:04:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.146
	  Hostname:    multinode-009530-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 2dd9efaa1b6645328dd273aa339fce67
	  System UUID:                2dd9efaa-1b66-4532-8dd2-73aa339fce67
	  Boot ID:                    35cdeb57-f454-489f-afe5-67bf46ef891c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-58859    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 kindnet-4tb65              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16s
	  kube-system                 kube-proxy-6rxv8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11s                kube-proxy       
	  Normal  NodeHasSufficientMemory  16s (x5 over 18s)  kubelet          Node multinode-009530-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16s (x5 over 18s)  kubelet          Node multinode-009530-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16s (x5 over 18s)  kubelet          Node multinode-009530-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15s                node-controller  Node multinode-009530-m02 event: Registered Node multinode-009530-m02 in Controller
	  Normal  NodeReady                8s                 kubelet          Node multinode-009530-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Jul17 22:03] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071605] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.304717] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.522116] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.153874] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.032184] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.124990] systemd-fstab-generator[645]: Ignoring "noauto" for root device
	[  +0.106429] systemd-fstab-generator[656]: Ignoring "noauto" for root device
	[  +0.139901] systemd-fstab-generator[669]: Ignoring "noauto" for root device
	[  +0.108864] systemd-fstab-generator[680]: Ignoring "noauto" for root device
	[  +0.221475] systemd-fstab-generator[704]: Ignoring "noauto" for root device
	[  +9.176264] systemd-fstab-generator[929]: Ignoring "noauto" for root device
	[  +9.280357] systemd-fstab-generator[1261]: Ignoring "noauto" for root device
	[Jul17 22:04] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [b656cd44b51979a4dcecb8599727f6135d8c62c85599f3a7c227775adbdc1de4] <==
	* {"level":"info","ts":"2023-07-17T22:03:46.694Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"26257d506d5fabfb","local-member-id":"d8a7e113a49009a2","added-peer-id":"d8a7e113a49009a2","added-peer-peer-urls":["https://192.168.39.222:2380"]}
	{"level":"info","ts":"2023-07-17T22:03:46.695Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-17T22:03:46.695Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"d8a7e113a49009a2","initial-advertise-peer-urls":["https://192.168.39.222:2380"],"listen-peer-urls":["https://192.168.39.222:2380"],"advertise-client-urls":["https://192.168.39.222:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.222:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-17T22:03:46.695Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-17T22:03:46.695Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.39.222:2380"}
	{"level":"info","ts":"2023-07-17T22:03:46.695Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.39.222:2380"}
	{"level":"info","ts":"2023-07-17T22:03:47.328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 is starting a new election at term 1"}
	{"level":"info","ts":"2023-07-17T22:03:47.328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-07-17T22:03:47.328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 received MsgPreVoteResp from d8a7e113a49009a2 at term 1"}
	{"level":"info","ts":"2023-07-17T22:03:47.328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 became candidate at term 2"}
	{"level":"info","ts":"2023-07-17T22:03:47.328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 received MsgVoteResp from d8a7e113a49009a2 at term 2"}
	{"level":"info","ts":"2023-07-17T22:03:47.328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 became leader at term 2"}
	{"level":"info","ts":"2023-07-17T22:03:47.328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d8a7e113a49009a2 elected leader d8a7e113a49009a2 at term 2"}
	{"level":"info","ts":"2023-07-17T22:03:47.330Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T22:03:47.331Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"d8a7e113a49009a2","local-member-attributes":"{Name:multinode-009530 ClientURLs:[https://192.168.39.222:2379]}","request-path":"/0/members/d8a7e113a49009a2/attributes","cluster-id":"26257d506d5fabfb","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-17T22:03:47.331Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T22:03:47.332Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"26257d506d5fabfb","local-member-id":"d8a7e113a49009a2","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T22:03:47.332Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T22:03:47.332Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T22:03:47.332Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T22:03:47.333Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-17T22:03:47.344Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.39.222:2379"}
	{"level":"info","ts":"2023-07-17T22:03:47.345Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-17T22:03:47.345Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-17T22:04:44.686Z","caller":"traceutil/trace.go:171","msg":"trace[2023653191] transaction","detail":"{read_only:false; response_revision:503; number_of_response:1; }","duration":"205.793684ms","start":"2023-07-17T22:04:44.480Z","end":"2023-07-17T22:04:44.686Z","steps":["trace[2023653191] 'process raft request'  (duration: 204.278646ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  22:04:59 up 1 min,  0 users,  load average: 0.39, 0.18, 0.07
	Linux multinode-009530 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [182d214321207b9c76d2dc7bc7b359ae82cfcb56b7c95f06b133f9d9092b857c] <==
	* I0717 22:04:09.773044       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0717 22:04:09.773212       1 main.go:107] hostIP = 192.168.39.222
	podIP = 192.168.39.222
	I0717 22:04:09.773521       1 main.go:116] setting mtu 1500 for CNI 
	I0717 22:04:09.773641       1 main.go:146] kindnetd IP family: "ipv4"
	I0717 22:04:09.858463       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0717 22:04:10.459180       1 main.go:223] Handling node with IPs: map[192.168.39.222:{}]
	I0717 22:04:10.459267       1 main.go:227] handling current node
	I0717 22:04:20.471278       1 main.go:223] Handling node with IPs: map[192.168.39.222:{}]
	I0717 22:04:20.471323       1 main.go:227] handling current node
	I0717 22:04:30.478570       1 main.go:223] Handling node with IPs: map[192.168.39.222:{}]
	I0717 22:04:30.478713       1 main.go:227] handling current node
	I0717 22:04:40.486750       1 main.go:223] Handling node with IPs: map[192.168.39.222:{}]
	I0717 22:04:40.486943       1 main.go:227] handling current node
	I0717 22:04:50.500328       1 main.go:223] Handling node with IPs: map[192.168.39.222:{}]
	I0717 22:04:50.500426       1 main.go:227] handling current node
	I0717 22:04:50.500452       1 main.go:223] Handling node with IPs: map[192.168.39.146:{}]
	I0717 22:04:50.500469       1 main.go:250] Node multinode-009530-m02 has CIDR [10.244.1.0/24] 
	I0717 22:04:50.500821       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.146 Flags: [] Table: 0} 
	
	* 
	* ==> kube-apiserver [2a725fb53f4f6f9bc5225db4e85a2e8b5e77715c19d0aa83420f986b7e4279c5] <==
	* I0717 22:03:49.005983       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 22:03:49.019383       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0717 22:03:49.020784       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 22:03:49.023092       1 shared_informer.go:318] Caches are synced for configmaps
	I0717 22:03:49.027242       1 controller.go:624] quota admission added evaluator for: namespaces
	I0717 22:03:49.035133       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0717 22:03:49.035181       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	E0717 22:03:49.060937       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0717 22:03:49.265341       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 22:03:49.522184       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0717 22:03:49.820988       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0717 22:03:49.832776       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0717 22:03:49.832819       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 22:03:50.459289       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 22:03:50.517176       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 22:03:50.598849       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0717 22:03:50.610447       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.39.222]
	I0717 22:03:50.611337       1 controller.go:624] quota admission added evaluator for: endpoints
	I0717 22:03:50.622536       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 22:03:50.980929       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0717 22:03:52.344219       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0717 22:03:52.368973       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0717 22:03:52.393702       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0717 22:04:05.103070       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0717 22:04:05.200095       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [bd2e42c13ba34bf9c483c81567627c96f1e6ae5c4f0cd8e053ba66e0881c2424] <==
	* I0717 22:04:04.429817       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-multinode-009530" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0717 22:04:04.440901       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-multinode-009530" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0717 22:04:04.505156       1 shared_informer.go:318] Caches are synced for resource quota
	I0717 22:04:04.836888       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 22:04:04.849770       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 22:04:04.849849       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0717 22:04:05.111341       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5d78c9869d to 2"
	I0717 22:04:05.168903       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
	I0717 22:04:05.311406       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-m5spw"
	I0717 22:04:05.378553       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-gh4hn"
	I0717 22:04:05.527704       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-65hxr"
	I0717 22:04:05.585728       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-z4fr8"
	I0717 22:04:05.697216       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-65hxr"
	I0717 22:04:14.408874       1 node_lifecycle_controller.go:1046] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0717 22:04:43.222926       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-009530-m02\" does not exist"
	I0717 22:04:43.247214       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-009530-m02" podCIDRs=[10.244.1.0/24]
	I0717 22:04:43.248125       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-4tb65"
	I0717 22:04:43.248171       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-6rxv8"
	I0717 22:04:44.413877       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-009530-m02"
	I0717 22:04:44.414141       1 event.go:307] "Event occurred" object="multinode-009530-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-009530-m02 event: Registered Node multinode-009530-m02 in Controller"
	W0717 22:04:51.169694       1 topologycache.go:232] Can't get CPU or zone information for multinode-009530-m02 node
	I0717 22:04:53.404648       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-67b7f59bb to 2"
	I0717 22:04:53.423000       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-58859"
	I0717 22:04:53.442104       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-p72ln"
	I0717 22:04:54.425944       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb-58859" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-67b7f59bb-58859"
	
	* 
	* ==> kube-proxy [4386a141e28a3b6942f47c866d899cd642a9e386ed50ec6b7b7009199ba7f62e] <==
	* I0717 22:04:07.220850       1 node.go:141] Successfully retrieved node IP: 192.168.39.222
	I0717 22:04:07.220953       1 server_others.go:110] "Detected node IP" address="192.168.39.222"
	I0717 22:04:07.221002       1 server_others.go:554] "Using iptables proxy"
	I0717 22:04:07.264839       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0717 22:04:07.264889       1 server_others.go:192] "Using iptables Proxier"
	I0717 22:04:07.265295       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 22:04:07.266359       1 server.go:658] "Version info" version="v1.27.3"
	I0717 22:04:07.266421       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 22:04:07.269690       1 config.go:188] "Starting service config controller"
	I0717 22:04:07.269974       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 22:04:07.270261       1 config.go:97] "Starting endpoint slice config controller"
	I0717 22:04:07.270295       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 22:04:07.272260       1 config.go:315] "Starting node config controller"
	I0717 22:04:07.272294       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 22:04:07.370546       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0717 22:04:07.370795       1 shared_informer.go:318] Caches are synced for service config
	I0717 22:04:07.372709       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [6100f3e2988f1145928d5ca3186b81eb1c0498a5be96b2e5dffc6d6107578d1f] <==
	* W0717 22:03:49.031986       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 22:03:49.035663       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 22:03:49.035975       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 22:03:49.036474       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 22:03:49.037345       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 22:03:49.037702       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 22:03:49.927982       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 22:03:49.928218       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 22:03:50.014443       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 22:03:50.014531       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 22:03:50.028147       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 22:03:50.028225       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 22:03:50.061484       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 22:03:50.061536       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 22:03:50.073092       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 22:03:50.073178       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 22:03:50.132708       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 22:03:50.132761       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 22:03:50.151564       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 22:03:50.151718       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 22:03:50.209788       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 22:03:50.209870       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 22:03:50.216533       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 22:03:50.216679       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0717 22:03:52.306300       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-07-17 22:03:19 UTC, ends at Mon 2023-07-17 22:04:59 UTC. --
	Jul 17 22:04:05 multinode-009530 kubelet[1268]: I0717 22:04:05.485648    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cwn7\" (UniqueName: \"kubernetes.io/projected/a4bf0eb3-126a-463e-a670-b4793e1c5ec9-kube-api-access-2cwn7\") pod \"kube-proxy-m5spw\" (UID: \"a4bf0eb3-126a-463e-a670-b4793e1c5ec9\") " pod="kube-system/kube-proxy-m5spw"
	Jul 17 22:04:05 multinode-009530 kubelet[1268]: I0717 22:04:05.485672    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a4bf0eb3-126a-463e-a670-b4793e1c5ec9-kube-proxy\") pod \"kube-proxy-m5spw\" (UID: \"a4bf0eb3-126a-463e-a670-b4793e1c5ec9\") " pod="kube-system/kube-proxy-m5spw"
	Jul 17 22:04:05 multinode-009530 kubelet[1268]: I0717 22:04:05.485713    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4bf0eb3-126a-463e-a670-b4793e1c5ec9-xtables-lock\") pod \"kube-proxy-m5spw\" (UID: \"a4bf0eb3-126a-463e-a670-b4793e1c5ec9\") " pod="kube-system/kube-proxy-m5spw"
	Jul 17 22:04:05 multinode-009530 kubelet[1268]: I0717 22:04:05.588137    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq548\" (UniqueName: \"kubernetes.io/projected/d474f5c5-bd94-411b-8d69-b3871c2b5653-kube-api-access-zq548\") pod \"kindnet-gh4hn\" (UID: \"d474f5c5-bd94-411b-8d69-b3871c2b5653\") " pod="kube-system/kindnet-gh4hn"
	Jul 17 22:04:05 multinode-009530 kubelet[1268]: I0717 22:04:05.588206    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d474f5c5-bd94-411b-8d69-b3871c2b5653-lib-modules\") pod \"kindnet-gh4hn\" (UID: \"d474f5c5-bd94-411b-8d69-b3871c2b5653\") " pod="kube-system/kindnet-gh4hn"
	Jul 17 22:04:05 multinode-009530 kubelet[1268]: I0717 22:04:05.588249    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d474f5c5-bd94-411b-8d69-b3871c2b5653-cni-cfg\") pod \"kindnet-gh4hn\" (UID: \"d474f5c5-bd94-411b-8d69-b3871c2b5653\") " pod="kube-system/kindnet-gh4hn"
	Jul 17 22:04:05 multinode-009530 kubelet[1268]: I0717 22:04:05.588276    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d474f5c5-bd94-411b-8d69-b3871c2b5653-xtables-lock\") pod \"kindnet-gh4hn\" (UID: \"d474f5c5-bd94-411b-8d69-b3871c2b5653\") " pod="kube-system/kindnet-gh4hn"
	Jul 17 22:04:07 multinode-009530 kubelet[1268]: I0717 22:04:07.692827    1268 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-m5spw" podStartSLOduration=2.692796777 podCreationTimestamp="2023-07-17 22:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-17 22:04:07.691751733 +0000 UTC m=+15.365194512" watchObservedRunningTime="2023-07-17 22:04:07.692796777 +0000 UTC m=+15.366239556"
	Jul 17 22:04:10 multinode-009530 kubelet[1268]: I0717 22:04:10.963948    1268 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jul 17 22:04:11 multinode-009530 kubelet[1268]: I0717 22:04:11.004943    1268 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-gh4hn" podStartSLOduration=6.0049094 podCreationTimestamp="2023-07-17 22:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-17 22:04:09.702630304 +0000 UTC m=+17.376073083" watchObservedRunningTime="2023-07-17 22:04:11.0049094 +0000 UTC m=+18.678352189"
	Jul 17 22:04:11 multinode-009530 kubelet[1268]: I0717 22:04:11.005045    1268 topology_manager.go:212] "Topology Admit Handler"
	Jul 17 22:04:11 multinode-009530 kubelet[1268]: I0717 22:04:11.014047    1268 topology_manager.go:212] "Topology Admit Handler"
	Jul 17 22:04:11 multinode-009530 kubelet[1268]: I0717 22:04:11.028916    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1fb1d992-a7b6-4259-ba61-dc4092c65c44-config-volume\") pod \"coredns-5d78c9869d-z4fr8\" (UID: \"1fb1d992-a7b6-4259-ba61-dc4092c65c44\") " pod="kube-system/coredns-5d78c9869d-z4fr8"
	Jul 17 22:04:11 multinode-009530 kubelet[1268]: I0717 22:04:11.028955    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqkkm\" (UniqueName: \"kubernetes.io/projected/d8f48e9c-2b37-4edc-89e4-d032cac0d573-kube-api-access-tqkkm\") pod \"storage-provisioner\" (UID: \"d8f48e9c-2b37-4edc-89e4-d032cac0d573\") " pod="kube-system/storage-provisioner"
	Jul 17 22:04:11 multinode-009530 kubelet[1268]: I0717 22:04:11.028978    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg65g\" (UniqueName: \"kubernetes.io/projected/1fb1d992-a7b6-4259-ba61-dc4092c65c44-kube-api-access-bg65g\") pod \"coredns-5d78c9869d-z4fr8\" (UID: \"1fb1d992-a7b6-4259-ba61-dc4092c65c44\") " pod="kube-system/coredns-5d78c9869d-z4fr8"
	Jul 17 22:04:11 multinode-009530 kubelet[1268]: I0717 22:04:11.029003    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d8f48e9c-2b37-4edc-89e4-d032cac0d573-tmp\") pod \"storage-provisioner\" (UID: \"d8f48e9c-2b37-4edc-89e4-d032cac0d573\") " pod="kube-system/storage-provisioner"
	Jul 17 22:04:12 multinode-009530 kubelet[1268]: I0717 22:04:12.710420    1268 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=7.710386413 podCreationTimestamp="2023-07-17 22:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-17 22:04:12.709798986 +0000 UTC m=+20.383241766" watchObservedRunningTime="2023-07-17 22:04:12.710386413 +0000 UTC m=+20.383829192"
	Jul 17 22:04:52 multinode-009530 kubelet[1268]: E0717 22:04:52.596872    1268 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 17 22:04:52 multinode-009530 kubelet[1268]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 22:04:52 multinode-009530 kubelet[1268]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 22:04:52 multinode-009530 kubelet[1268]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 22:04:53 multinode-009530 kubelet[1268]: I0717 22:04:53.473516    1268 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-z4fr8" podStartSLOduration=48.473477352 podCreationTimestamp="2023-07-17 22:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-17 22:04:12.734362049 +0000 UTC m=+20.407804829" watchObservedRunningTime="2023-07-17 22:04:53.473477352 +0000 UTC m=+61.146920129"
	Jul 17 22:04:53 multinode-009530 kubelet[1268]: I0717 22:04:53.474020    1268 topology_manager.go:212] "Topology Admit Handler"
	Jul 17 22:04:53 multinode-009530 kubelet[1268]: I0717 22:04:53.575124    1268 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghzxp\" (UniqueName: \"kubernetes.io/projected/aecc37f7-73f7-490b-9b82-bf330600bf41-kube-api-access-ghzxp\") pod \"busybox-67b7f59bb-p72ln\" (UID: \"aecc37f7-73f7-490b-9b82-bf330600bf41\") " pod="default/busybox-67b7f59bb-p72ln"
	Jul 17 22:04:55 multinode-009530 kubelet[1268]: I0717 22:04:55.867150    1268 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-67b7f59bb-p72ln" podStartSLOduration=1.999224103 podCreationTimestamp="2023-07-17 22:04:53 +0000 UTC" firstStartedPulling="2023-07-17 22:04:54.383564173 +0000 UTC m=+62.057006954" lastFinishedPulling="2023-07-17 22:04:55.251415548 +0000 UTC m=+62.924858539" observedRunningTime="2023-07-17 22:04:55.866178276 +0000 UTC m=+63.539621048" watchObservedRunningTime="2023-07-17 22:04:55.867075688 +0000 UTC m=+63.540518470"
	

                                                
                                                
-- /stdout --
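(Editorial note, not part of the captured output.) The kube-scheduler block in the logs above is dominated by "forbidden" list/watch failures for user "system:kube-scheduler" during the first seconds after start-up; the final scheduler line shows the client-ca informer syncing, which is the usual sign that these were a start-up ordering race rather than a missing ClusterRoleBinding. A quick manual spot-check, assuming the same kubeconfig context used elsewhere in this report, could be:

    # hypothetical spot-check: can system:kube-scheduler list the resources it complained about?
    kubectl --context multinode-009530 auth can-i list csinodes.storage.k8s.io --as=system:kube-scheduler
    kubectl --context multinode-009530 auth can-i list pods --as=system:kube-scheduler

"yes" from both commands would confirm the RBAC bindings are in place and the warnings above were transient.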
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-009530 -n multinode-009530
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-009530 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.08s)
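(Editorial note.) The kubelet section of the captured logs also records a recurring "Could not set up iptables canary" error: the error text ("Table does not exist (do you need to insmod?)") suggests the ip6table_nat kernel module is not loaded in the guest, so the KUBE-KUBELET-CANARY chain cannot be created in the IPv6 nat table. That is almost certainly unrelated to the ping failure in this test and is harmless for an IPv4-only cluster. A hedged way to confirm it from the host, reusing the ssh form used elsewhere in this report, could be:

    # hypothetical check: is the IPv6 nat table available inside the guest?
    out/minikube-linux-amd64 -p multinode-009530 ssh "lsmod | grep ip6table_nat; sudo ip6tables -t nat -L -n"
    # an empty lsmod result plus the same "Table does not exist" error reproduces the kubelet warning;
    # "sudo modprobe ip6table_nat" would silence it, assuming the guest kernel ships that module.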

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (681.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-009530
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-009530
E0717 22:07:28.102389   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/functional-767593/client.crt: no such file or directory
E0717 22:08:11.892770   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-009530: exit status 82 (2m1.268454113s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-009530"  ...
	* Stopping node "multinode-009530"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:292: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-009530" : exit status 82
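(Editorial note.) Exit status 82 with GUEST_STOP_TIMEOUT means the kvm2 driver gave up waiting for the VM to leave the "Running" state. The DBG lines later in this log show the libvirt domain and network are named after the profile ("multinode-009530" / "mk-multinode-009530"), so a manual fallback on the KVM host, as a sketch rather than anything the test harness does, could be:

    # hypothetical manual fallback when "minikube stop" times out on the kvm2 driver
    sudo virsh domstate multinode-009530      # confirm libvirt still reports the domain as running
    sudo virsh shutdown multinode-009530      # graceful ACPI shutdown
    sudo virsh destroy multinode-009530       # hard power-off, only if the graceful shutdown also hangs
    # the restart below then spends close to five minutes in "no route to host" SSH retries before recovering;
    # the guest's lease could be checked directly with: sudo virsh net-dhcp-leases mk-multinode-009530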
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-009530 --wait=true -v=8 --alsologtostderr
E0717 22:09:34.939520   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
E0717 22:10:31.747151   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: no such file or directory
E0717 22:12:28.101623   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/functional-767593/client.crt: no such file or directory
E0717 22:13:11.892961   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
E0717 22:13:51.146202   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/functional-767593/client.crt: no such file or directory
E0717 22:15:31.748660   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: no such file or directory
E0717 22:16:54.795780   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: no such file or directory
E0717 22:17:28.101670   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/functional-767593/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-009530 --wait=true -v=8 --alsologtostderr: (9m17.476400183s)
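(Editorial note.) The cert_rotation errors interleaved above (client.crt for functional-767593, addons-436248 and ingress-addon-legacy-480151 reported as "no such file or directory") most likely come from the test binary's background client-cert reloader still watching kubeconfig entries for profiles that were deleted earlier in the run; they are noise relative to this test's failure. A hypothetical way to confirm those profiles are gone:

    ls /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/
    # the deleted profiles should no longer be listed; only still-active profiles such as multinode-009530 remain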
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-009530
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-009530 -n multinode-009530
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-009530 logs -n 25: (1.540586277s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-009530 ssh -n                                                                 | multinode-009530 | jenkins | v1.31.0 | 17 Jul 23 22:05 UTC | 17 Jul 23 22:05 UTC |
	|         | multinode-009530-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-009530 cp multinode-009530-m02:/home/docker/cp-test.txt                       | multinode-009530 | jenkins | v1.31.0 | 17 Jul 23 22:05 UTC | 17 Jul 23 22:05 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile170704396/001/cp-test_multinode-009530-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-009530 ssh -n                                                                 | multinode-009530 | jenkins | v1.31.0 | 17 Jul 23 22:05 UTC | 17 Jul 23 22:05 UTC |
	|         | multinode-009530-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-009530 cp multinode-009530-m02:/home/docker/cp-test.txt                       | multinode-009530 | jenkins | v1.31.0 | 17 Jul 23 22:05 UTC | 17 Jul 23 22:05 UTC |
	|         | multinode-009530:/home/docker/cp-test_multinode-009530-m02_multinode-009530.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-009530 ssh -n                                                                 | multinode-009530 | jenkins | v1.31.0 | 17 Jul 23 22:05 UTC | 17 Jul 23 22:05 UTC |
	|         | multinode-009530-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-009530 ssh -n multinode-009530 sudo cat                                       | multinode-009530 | jenkins | v1.31.0 | 17 Jul 23 22:05 UTC | 17 Jul 23 22:05 UTC |
	|         | /home/docker/cp-test_multinode-009530-m02_multinode-009530.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-009530 cp multinode-009530-m02:/home/docker/cp-test.txt                       | multinode-009530 | jenkins | v1.31.0 | 17 Jul 23 22:05 UTC | 17 Jul 23 22:05 UTC |
	|         | multinode-009530-m03:/home/docker/cp-test_multinode-009530-m02_multinode-009530-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-009530 ssh -n                                                                 | multinode-009530 | jenkins | v1.31.0 | 17 Jul 23 22:05 UTC | 17 Jul 23 22:05 UTC |
	|         | multinode-009530-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-009530 ssh -n multinode-009530-m03 sudo cat                                   | multinode-009530 | jenkins | v1.31.0 | 17 Jul 23 22:05 UTC | 17 Jul 23 22:05 UTC |
	|         | /home/docker/cp-test_multinode-009530-m02_multinode-009530-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-009530 cp testdata/cp-test.txt                                                | multinode-009530 | jenkins | v1.31.0 | 17 Jul 23 22:05 UTC | 17 Jul 23 22:05 UTC |
	|         | multinode-009530-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-009530 ssh -n                                                                 | multinode-009530 | jenkins | v1.31.0 | 17 Jul 23 22:05 UTC | 17 Jul 23 22:05 UTC |
	|         | multinode-009530-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-009530 cp multinode-009530-m03:/home/docker/cp-test.txt                       | multinode-009530 | jenkins | v1.31.0 | 17 Jul 23 22:05 UTC | 17 Jul 23 22:05 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile170704396/001/cp-test_multinode-009530-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-009530 ssh -n                                                                 | multinode-009530 | jenkins | v1.31.0 | 17 Jul 23 22:05 UTC | 17 Jul 23 22:05 UTC |
	|         | multinode-009530-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-009530 cp multinode-009530-m03:/home/docker/cp-test.txt                       | multinode-009530 | jenkins | v1.31.0 | 17 Jul 23 22:05 UTC | 17 Jul 23 22:05 UTC |
	|         | multinode-009530:/home/docker/cp-test_multinode-009530-m03_multinode-009530.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-009530 ssh -n                                                                 | multinode-009530 | jenkins | v1.31.0 | 17 Jul 23 22:05 UTC | 17 Jul 23 22:05 UTC |
	|         | multinode-009530-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-009530 ssh -n multinode-009530 sudo cat                                       | multinode-009530 | jenkins | v1.31.0 | 17 Jul 23 22:05 UTC | 17 Jul 23 22:05 UTC |
	|         | /home/docker/cp-test_multinode-009530-m03_multinode-009530.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-009530 cp multinode-009530-m03:/home/docker/cp-test.txt                       | multinode-009530 | jenkins | v1.31.0 | 17 Jul 23 22:05 UTC | 17 Jul 23 22:05 UTC |
	|         | multinode-009530-m02:/home/docker/cp-test_multinode-009530-m03_multinode-009530-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-009530 ssh -n                                                                 | multinode-009530 | jenkins | v1.31.0 | 17 Jul 23 22:05 UTC | 17 Jul 23 22:05 UTC |
	|         | multinode-009530-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-009530 ssh -n multinode-009530-m02 sudo cat                                   | multinode-009530 | jenkins | v1.31.0 | 17 Jul 23 22:05 UTC | 17 Jul 23 22:05 UTC |
	|         | /home/docker/cp-test_multinode-009530-m03_multinode-009530-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-009530 node stop m03                                                          | multinode-009530 | jenkins | v1.31.0 | 17 Jul 23 22:05 UTC | 17 Jul 23 22:05 UTC |
	| node    | multinode-009530 node start                                                             | multinode-009530 | jenkins | v1.31.0 | 17 Jul 23 22:05 UTC | 17 Jul 23 22:06 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-009530                                                                | multinode-009530 | jenkins | v1.31.0 | 17 Jul 23 22:06 UTC |                     |
	| stop    | -p multinode-009530                                                                     | multinode-009530 | jenkins | v1.31.0 | 17 Jul 23 22:06 UTC |                     |
	| start   | -p multinode-009530                                                                     | multinode-009530 | jenkins | v1.31.0 | 17 Jul 23 22:08 UTC | 17 Jul 23 22:17 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-009530                                                                | multinode-009530 | jenkins | v1.31.0 | 17 Jul 23 22:17 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 22:08:22
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 22:08:22.298752   37994 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:08:22.298883   37994 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:08:22.298893   37994 out.go:309] Setting ErrFile to fd 2...
	I0717 22:08:22.298900   37994 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:08:22.299099   37994 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-15759/.minikube/bin
	I0717 22:08:22.299677   37994 out.go:303] Setting JSON to false
	I0717 22:08:22.300526   37994 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6654,"bootTime":1689625048,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 22:08:22.300588   37994 start.go:138] virtualization: kvm guest
	I0717 22:08:22.303837   37994 out.go:177] * [multinode-009530] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 22:08:22.305914   37994 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 22:08:22.307280   37994 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 22:08:22.305938   37994 notify.go:220] Checking for updates...
	I0717 22:08:22.310380   37994 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:08:22.311816   37994 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-15759/.minikube
	I0717 22:08:22.313234   37994 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 22:08:22.314679   37994 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 22:08:22.316594   37994 config.go:182] Loaded profile config "multinode-009530": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:08:22.316682   37994 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 22:08:22.317079   37994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:08:22.317165   37994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:08:22.341473   37994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36965
	I0717 22:08:22.341884   37994 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:08:22.342415   37994 main.go:141] libmachine: Using API Version  1
	I0717 22:08:22.342436   37994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:08:22.342753   37994 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:08:22.342932   37994 main.go:141] libmachine: (multinode-009530) Calling .DriverName
	I0717 22:08:22.377188   37994 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 22:08:22.378494   37994 start.go:298] selected driver: kvm2
	I0717 22:08:22.378507   37994 start.go:880] validating driver "kvm2" against &{Name:multinode-009530 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-00953
0 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.222 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.146 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.205 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:fa
lse istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAut
hSock: SSHAgentPID:0}
	I0717 22:08:22.378632   37994 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 22:08:22.378929   37994 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:08:22.378989   37994 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16899-15759/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 22:08:22.393463   37994 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.0
	I0717 22:08:22.394117   37994 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 22:08:22.394148   37994 cni.go:84] Creating CNI manager for ""
	I0717 22:08:22.394160   37994 cni.go:137] 3 nodes found, recommending kindnet
	I0717 22:08:22.394170   37994 start_flags.go:319] config:
	{Name:multinode-009530 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-009530 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.222 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.146 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.205 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:fa
lse metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:08:22.394360   37994 iso.go:125] acquiring lock: {Name:mk0fc3164b0f4222cb2c3b274841202f1f6c0fc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:08:22.396350   37994 out.go:177] * Starting control plane node multinode-009530 in cluster multinode-009530
	I0717 22:08:22.397744   37994 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 22:08:22.397779   37994 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0717 22:08:22.397785   37994 cache.go:57] Caching tarball of preloaded images
	I0717 22:08:22.397883   37994 preload.go:174] Found /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 22:08:22.397898   37994 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 22:08:22.398028   37994 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/config.json ...
	I0717 22:08:22.398212   37994 start.go:365] acquiring machines lock for multinode-009530: {Name:mk8a02d78ba6485e1a7d39c2e7feff944c6a9dbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 22:08:22.398253   37994 start.go:369] acquired machines lock for "multinode-009530" in 22.535µs
	I0717 22:08:22.398270   37994 start.go:96] Skipping create...Using existing machine configuration
	I0717 22:08:22.398279   37994 fix.go:54] fixHost starting: 
	I0717 22:08:22.398535   37994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:08:22.398578   37994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:08:22.412256   37994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33041
	I0717 22:08:22.412624   37994 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:08:22.413056   37994 main.go:141] libmachine: Using API Version  1
	I0717 22:08:22.413081   37994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:08:22.413395   37994 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:08:22.413596   37994 main.go:141] libmachine: (multinode-009530) Calling .DriverName
	I0717 22:08:22.413783   37994 main.go:141] libmachine: (multinode-009530) Calling .GetState
	I0717 22:08:22.415345   37994 fix.go:102] recreateIfNeeded on multinode-009530: state=Running err=<nil>
	W0717 22:08:22.415375   37994 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 22:08:22.418417   37994 out.go:177] * Updating the running kvm2 "multinode-009530" VM ...
	I0717 22:08:22.419877   37994 machine.go:88] provisioning docker machine ...
	I0717 22:08:22.419894   37994 main.go:141] libmachine: (multinode-009530) Calling .DriverName
	I0717 22:08:22.420089   37994 main.go:141] libmachine: (multinode-009530) Calling .GetMachineName
	I0717 22:08:22.420242   37994 buildroot.go:166] provisioning hostname "multinode-009530"
	I0717 22:08:22.420255   37994 main.go:141] libmachine: (multinode-009530) Calling .GetMachineName
	I0717 22:08:22.420357   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHHostname
	I0717 22:08:22.422548   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:08:22.423006   37994 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:08:22.423033   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:08:22.423184   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHPort
	I0717 22:08:22.423337   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:08:22.423506   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:08:22.423651   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHUsername
	I0717 22:08:22.423833   37994 main.go:141] libmachine: Using SSH client type: native
	I0717 22:08:22.424493   37994 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0717 22:08:22.424520   37994 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-009530 && echo "multinode-009530" | sudo tee /etc/hostname
	I0717 22:08:40.881883   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:08:46.961862   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:08:50.033838   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:08:56.113888   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:08:59.185747   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:09:05.265814   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:09:08.337782   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:09:14.417762   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:09:17.489784   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:09:23.569840   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:09:26.641752   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:09:32.721804   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:09:35.793790   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:09:41.873854   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:09:44.945767   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:09:51.025821   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:09:54.097807   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:10:00.177773   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:10:03.253827   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:10:09.329833   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:10:12.401786   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:10:18.481825   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:10:21.553888   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:10:27.633743   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:10:30.705821   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:10:36.785825   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:10:39.857820   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:10:45.937885   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:10:49.009871   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:10:55.089818   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:10:58.161782   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:11:04.241761   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:11:07.313818   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:11:13.393832   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:11:16.465848   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:11:22.545862   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:11:25.617833   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:11:31.697827   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:11:34.769787   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:11:40.849930   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:11:43.921783   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:11:50.001775   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:11:53.073796   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:11:59.153750   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:12:02.225844   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:12:08.305825   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:12:11.377774   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:12:17.457740   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:12:20.529734   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:12:26.609788   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:12:29.681812   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:12:35.761824   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:12:38.833808   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:12:44.913818   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:12:47.985813   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:12:54.065825   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:12:57.137771   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:13:03.217850   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:13:06.289813   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:13:12.369767   37994 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.222:22: connect: no route to host
	I0717 22:13:15.371925   37994 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:13:15.371957   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHHostname
	I0717 22:13:15.373815   37994 machine.go:91] provisioned docker machine in 4m52.953922058s
	I0717 22:13:15.373858   37994 fix.go:56] fixHost completed within 4m52.975579086s
	I0717 22:13:15.373865   37994 start.go:83] releasing machines lock for "multinode-009530", held for 4m52.9756014s
	W0717 22:13:15.373884   37994 start.go:672] error starting host: provision: host is not running
	W0717 22:13:15.374032   37994 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0717 22:13:15.374043   37994 start.go:687] Will try again in 5 seconds ...
	I0717 22:13:20.377014   37994 start.go:365] acquiring machines lock for multinode-009530: {Name:mk8a02d78ba6485e1a7d39c2e7feff944c6a9dbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 22:13:20.377155   37994 start.go:369] acquired machines lock for "multinode-009530" in 82.299µs
	I0717 22:13:20.377187   37994 start.go:96] Skipping create...Using existing machine configuration
	I0717 22:13:20.377195   37994 fix.go:54] fixHost starting: 
	I0717 22:13:20.377508   37994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:13:20.377575   37994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:13:20.392531   37994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40719
	I0717 22:13:20.392982   37994 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:13:20.393492   37994 main.go:141] libmachine: Using API Version  1
	I0717 22:13:20.393535   37994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:13:20.393827   37994 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:13:20.394043   37994 main.go:141] libmachine: (multinode-009530) Calling .DriverName
	I0717 22:13:20.394209   37994 main.go:141] libmachine: (multinode-009530) Calling .GetState
	I0717 22:13:20.395860   37994 fix.go:102] recreateIfNeeded on multinode-009530: state=Stopped err=<nil>
	I0717 22:13:20.395880   37994 main.go:141] libmachine: (multinode-009530) Calling .DriverName
	W0717 22:13:20.396061   37994 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 22:13:20.398919   37994 out.go:177] * Restarting existing kvm2 VM for "multinode-009530" ...
	I0717 22:13:20.400440   37994 main.go:141] libmachine: (multinode-009530) Calling .Start
	I0717 22:13:20.400667   37994 main.go:141] libmachine: (multinode-009530) Ensuring networks are active...
	I0717 22:13:20.401460   37994 main.go:141] libmachine: (multinode-009530) Ensuring network default is active
	I0717 22:13:20.401871   37994 main.go:141] libmachine: (multinode-009530) Ensuring network mk-multinode-009530 is active
	I0717 22:13:20.402218   37994 main.go:141] libmachine: (multinode-009530) Getting domain xml...
	I0717 22:13:20.402975   37994 main.go:141] libmachine: (multinode-009530) Creating domain...
	I0717 22:13:20.763160   37994 main.go:141] libmachine: (multinode-009530) Waiting to get IP...
	I0717 22:13:20.764038   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:20.764594   37994 main.go:141] libmachine: (multinode-009530) DBG | unable to find current IP address of domain multinode-009530 in network mk-multinode-009530
	I0717 22:13:20.764680   37994 main.go:141] libmachine: (multinode-009530) DBG | I0717 22:13:20.764580   38817 retry.go:31] will retry after 237.506366ms: waiting for machine to come up
	I0717 22:13:21.004038   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:21.004482   37994 main.go:141] libmachine: (multinode-009530) DBG | unable to find current IP address of domain multinode-009530 in network mk-multinode-009530
	I0717 22:13:21.004505   37994 main.go:141] libmachine: (multinode-009530) DBG | I0717 22:13:21.004439   38817 retry.go:31] will retry after 239.829275ms: waiting for machine to come up
	I0717 22:13:21.245924   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:21.246398   37994 main.go:141] libmachine: (multinode-009530) DBG | unable to find current IP address of domain multinode-009530 in network mk-multinode-009530
	I0717 22:13:21.246415   37994 main.go:141] libmachine: (multinode-009530) DBG | I0717 22:13:21.246338   38817 retry.go:31] will retry after 392.369254ms: waiting for machine to come up
	I0717 22:13:21.639895   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:21.640403   37994 main.go:141] libmachine: (multinode-009530) DBG | unable to find current IP address of domain multinode-009530 in network mk-multinode-009530
	I0717 22:13:21.640434   37994 main.go:141] libmachine: (multinode-009530) DBG | I0717 22:13:21.640361   38817 retry.go:31] will retry after 594.402893ms: waiting for machine to come up
	I0717 22:13:22.236235   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:22.236678   37994 main.go:141] libmachine: (multinode-009530) DBG | unable to find current IP address of domain multinode-009530 in network mk-multinode-009530
	I0717 22:13:22.236702   37994 main.go:141] libmachine: (multinode-009530) DBG | I0717 22:13:22.236647   38817 retry.go:31] will retry after 671.698615ms: waiting for machine to come up
	I0717 22:13:22.909428   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:22.909959   37994 main.go:141] libmachine: (multinode-009530) DBG | unable to find current IP address of domain multinode-009530 in network mk-multinode-009530
	I0717 22:13:22.909981   37994 main.go:141] libmachine: (multinode-009530) DBG | I0717 22:13:22.909911   38817 retry.go:31] will retry after 851.628616ms: waiting for machine to come up
	I0717 22:13:23.762919   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:23.763455   37994 main.go:141] libmachine: (multinode-009530) DBG | unable to find current IP address of domain multinode-009530 in network mk-multinode-009530
	I0717 22:13:23.763481   37994 main.go:141] libmachine: (multinode-009530) DBG | I0717 22:13:23.763402   38817 retry.go:31] will retry after 887.07313ms: waiting for machine to come up
	I0717 22:13:24.652366   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:24.652904   37994 main.go:141] libmachine: (multinode-009530) DBG | unable to find current IP address of domain multinode-009530 in network mk-multinode-009530
	I0717 22:13:24.652930   37994 main.go:141] libmachine: (multinode-009530) DBG | I0717 22:13:24.652867   38817 retry.go:31] will retry after 1.411964684s: waiting for machine to come up
	I0717 22:13:26.066748   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:26.067166   37994 main.go:141] libmachine: (multinode-009530) DBG | unable to find current IP address of domain multinode-009530 in network mk-multinode-009530
	I0717 22:13:26.067201   37994 main.go:141] libmachine: (multinode-009530) DBG | I0717 22:13:26.067101   38817 retry.go:31] will retry after 1.293924629s: waiting for machine to come up
	I0717 22:13:27.362552   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:27.362974   37994 main.go:141] libmachine: (multinode-009530) DBG | unable to find current IP address of domain multinode-009530 in network mk-multinode-009530
	I0717 22:13:27.362996   37994 main.go:141] libmachine: (multinode-009530) DBG | I0717 22:13:27.362932   38817 retry.go:31] will retry after 2.266233402s: waiting for machine to come up
	I0717 22:13:29.630705   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:29.631166   37994 main.go:141] libmachine: (multinode-009530) DBG | unable to find current IP address of domain multinode-009530 in network mk-multinode-009530
	I0717 22:13:29.631209   37994 main.go:141] libmachine: (multinode-009530) DBG | I0717 22:13:29.631122   38817 retry.go:31] will retry after 1.768292577s: waiting for machine to come up
	I0717 22:13:31.401675   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:31.402162   37994 main.go:141] libmachine: (multinode-009530) DBG | unable to find current IP address of domain multinode-009530 in network mk-multinode-009530
	I0717 22:13:31.402192   37994 main.go:141] libmachine: (multinode-009530) DBG | I0717 22:13:31.402101   38817 retry.go:31] will retry after 3.408388587s: waiting for machine to come up
	I0717 22:13:34.814846   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:34.815335   37994 main.go:141] libmachine: (multinode-009530) DBG | unable to find current IP address of domain multinode-009530 in network mk-multinode-009530
	I0717 22:13:34.815365   37994 main.go:141] libmachine: (multinode-009530) DBG | I0717 22:13:34.815272   38817 retry.go:31] will retry after 4.266542469s: waiting for machine to come up
	I0717 22:13:39.086076   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:39.086594   37994 main.go:141] libmachine: (multinode-009530) Found IP for machine: 192.168.39.222
	I0717 22:13:39.086623   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has current primary IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:39.086636   37994 main.go:141] libmachine: (multinode-009530) Reserving static IP address...
	I0717 22:13:39.087055   37994 main.go:141] libmachine: (multinode-009530) Reserved static IP address: 192.168.39.222
	I0717 22:13:39.087078   37994 main.go:141] libmachine: (multinode-009530) Waiting for SSH to be available...
	I0717 22:13:39.087103   37994 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "multinode-009530", mac: "52:54:00:64:61:2c", ip: "192.168.39.222"} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:13:39.087127   37994 main.go:141] libmachine: (multinode-009530) DBG | skip adding static IP to network mk-multinode-009530 - found existing host DHCP lease matching {name: "multinode-009530", mac: "52:54:00:64:61:2c", ip: "192.168.39.222"}
	I0717 22:13:39.087136   37994 main.go:141] libmachine: (multinode-009530) DBG | Getting to WaitForSSH function...
	I0717 22:13:39.089276   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:39.089639   37994 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:13:39.089678   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:39.089791   37994 main.go:141] libmachine: (multinode-009530) DBG | Using SSH client type: external
	I0717 22:13:39.089818   37994 main.go:141] libmachine: (multinode-009530) DBG | Using SSH private key: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530/id_rsa (-rw-------)
	I0717 22:13:39.089858   37994 main.go:141] libmachine: (multinode-009530) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 22:13:39.089878   37994 main.go:141] libmachine: (multinode-009530) DBG | About to run SSH command:
	I0717 22:13:39.089888   37994 main.go:141] libmachine: (multinode-009530) DBG | exit 0
	I0717 22:13:39.177267   37994 main.go:141] libmachine: (multinode-009530) DBG | SSH cmd err, output: <nil>: 
	I0717 22:13:39.177754   37994 main.go:141] libmachine: (multinode-009530) Calling .GetConfigRaw
	I0717 22:13:39.178537   37994 main.go:141] libmachine: (multinode-009530) Calling .GetIP
	I0717 22:13:39.181375   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:39.181775   37994 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:13:39.181810   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:39.182096   37994 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/config.json ...
	I0717 22:13:39.182335   37994 machine.go:88] provisioning docker machine ...
	I0717 22:13:39.182358   37994 main.go:141] libmachine: (multinode-009530) Calling .DriverName
	I0717 22:13:39.182567   37994 main.go:141] libmachine: (multinode-009530) Calling .GetMachineName
	I0717 22:13:39.182734   37994 buildroot.go:166] provisioning hostname "multinode-009530"
	I0717 22:13:39.182758   37994 main.go:141] libmachine: (multinode-009530) Calling .GetMachineName
	I0717 22:13:39.182942   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHHostname
	I0717 22:13:39.184873   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:39.185207   37994 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:13:39.185242   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:39.185337   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHPort
	I0717 22:13:39.185487   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:13:39.185660   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:13:39.185787   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHUsername
	I0717 22:13:39.185916   37994 main.go:141] libmachine: Using SSH client type: native
	I0717 22:13:39.186330   37994 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0717 22:13:39.186345   37994 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-009530 && echo "multinode-009530" | sudo tee /etc/hostname
	I0717 22:13:39.311390   37994 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-009530
	
	I0717 22:13:39.311424   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHHostname
	I0717 22:13:39.314335   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:39.314719   37994 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:13:39.314747   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:39.315057   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHPort
	I0717 22:13:39.315246   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:13:39.315379   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:13:39.315500   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHUsername
	I0717 22:13:39.315654   37994 main.go:141] libmachine: Using SSH client type: native
	I0717 22:13:39.316039   37994 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0717 22:13:39.316061   37994 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-009530' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-009530/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-009530' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 22:13:39.433448   37994 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:13:39.433482   37994 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16899-15759/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-15759/.minikube}
	I0717 22:13:39.433513   37994 buildroot.go:174] setting up certificates
	I0717 22:13:39.433555   37994 provision.go:83] configureAuth start
	I0717 22:13:39.433571   37994 main.go:141] libmachine: (multinode-009530) Calling .GetMachineName
	I0717 22:13:39.433835   37994 main.go:141] libmachine: (multinode-009530) Calling .GetIP
	I0717 22:13:39.436533   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:39.436889   37994 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:13:39.436921   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:39.437103   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHHostname
	I0717 22:13:39.439702   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:39.440034   37994 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:13:39.440065   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:39.440199   37994 provision.go:138] copyHostCerts
	I0717 22:13:39.440229   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem
	I0717 22:13:39.440253   37994 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem, removing ...
	I0717 22:13:39.440261   37994 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem
	I0717 22:13:39.440349   37994 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem (1078 bytes)
	I0717 22:13:39.440449   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem
	I0717 22:13:39.440477   37994 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem, removing ...
	I0717 22:13:39.440490   37994 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem
	I0717 22:13:39.440526   37994 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem (1123 bytes)
	I0717 22:13:39.440595   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem
	I0717 22:13:39.440617   37994 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem, removing ...
	I0717 22:13:39.440626   37994 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem
	I0717 22:13:39.440657   37994 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem (1675 bytes)
	I0717 22:13:39.440723   37994 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem org=jenkins.multinode-009530 san=[192.168.39.222 192.168.39.222 localhost 127.0.0.1 minikube multinode-009530]
	I0717 22:13:39.791930   37994 provision.go:172] copyRemoteCerts
	I0717 22:13:39.791986   37994 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 22:13:39.792006   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHHostname
	I0717 22:13:39.794879   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:39.795288   37994 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:13:39.795321   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:39.795506   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHPort
	I0717 22:13:39.795747   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:13:39.795899   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHUsername
	I0717 22:13:39.796033   37994 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530/id_rsa Username:docker}
	I0717 22:13:39.879023   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 22:13:39.879096   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 22:13:39.905419   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 22:13:39.905493   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0717 22:13:39.929503   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 22:13:39.929587   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 22:13:39.953320   37994 provision.go:86] duration metric: configureAuth took 519.748577ms
	I0717 22:13:39.953347   37994 buildroot.go:189] setting minikube options for container-runtime
	I0717 22:13:39.953615   37994 config.go:182] Loaded profile config "multinode-009530": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:13:39.953718   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHHostname
	I0717 22:13:39.956186   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:39.956551   37994 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:13:39.956589   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:39.956772   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHPort
	I0717 22:13:39.956983   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:13:39.957153   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:13:39.957283   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHUsername
	I0717 22:13:39.957437   37994 main.go:141] libmachine: Using SSH client type: native
	I0717 22:13:39.957837   37994 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0717 22:13:39.957854   37994 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 22:13:40.283233   37994 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 22:13:40.283254   37994 machine.go:91] provisioned docker machine in 1.100905596s
	I0717 22:13:40.283262   37994 start.go:300] post-start starting for "multinode-009530" (driver="kvm2")
	I0717 22:13:40.283303   37994 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 22:13:40.283334   37994 main.go:141] libmachine: (multinode-009530) Calling .DriverName
	I0717 22:13:40.283621   37994 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 22:13:40.283644   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHHostname
	I0717 22:13:40.286455   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:40.286851   37994 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:13:40.286882   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:40.287063   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHPort
	I0717 22:13:40.287264   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:13:40.287439   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHUsername
	I0717 22:13:40.287544   37994 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530/id_rsa Username:docker}
	I0717 22:13:40.371861   37994 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 22:13:40.376200   37994 command_runner.go:130] > NAME=Buildroot
	I0717 22:13:40.376223   37994 command_runner.go:130] > VERSION=2021.02.12-1-gf5d52c7-dirty
	I0717 22:13:40.376230   37994 command_runner.go:130] > ID=buildroot
	I0717 22:13:40.376238   37994 command_runner.go:130] > VERSION_ID=2021.02.12
	I0717 22:13:40.376245   37994 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0717 22:13:40.376546   37994 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 22:13:40.376575   37994 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/addons for local assets ...
	I0717 22:13:40.376675   37994 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/files for local assets ...
	I0717 22:13:40.376761   37994 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> 229902.pem in /etc/ssl/certs
	I0717 22:13:40.376771   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> /etc/ssl/certs/229902.pem
	I0717 22:13:40.376850   37994 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 22:13:40.385455   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:13:40.410632   37994 start.go:303] post-start completed in 127.355876ms
	I0717 22:13:40.410706   37994 fix.go:56] fixHost completed within 20.033470812s
	I0717 22:13:40.410735   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHHostname
	I0717 22:13:40.413378   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:40.413900   37994 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:13:40.413925   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:40.414102   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHPort
	I0717 22:13:40.414322   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:13:40.414514   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:13:40.414707   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHUsername
	I0717 22:13:40.414895   37994 main.go:141] libmachine: Using SSH client type: native
	I0717 22:13:40.415282   37994 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I0717 22:13:40.415293   37994 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 22:13:40.527108   37994 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689632020.473524754
	
	I0717 22:13:40.527130   37994 fix.go:206] guest clock: 1689632020.473524754
	I0717 22:13:40.527137   37994 fix.go:219] Guest: 2023-07-17 22:13:40.473524754 +0000 UTC Remote: 2023-07-17 22:13:40.410713196 +0000 UTC m=+318.143566353 (delta=62.811558ms)
	I0717 22:13:40.527158   37994 fix.go:190] guest clock delta is within tolerance: 62.811558ms
	I0717 22:13:40.527164   37994 start.go:83] releasing machines lock for "multinode-009530", held for 20.149996284s
	I0717 22:13:40.527185   37994 main.go:141] libmachine: (multinode-009530) Calling .DriverName
	I0717 22:13:40.527462   37994 main.go:141] libmachine: (multinode-009530) Calling .GetIP
	I0717 22:13:40.530584   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:40.530959   37994 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:13:40.530993   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:40.531167   37994 main.go:141] libmachine: (multinode-009530) Calling .DriverName
	I0717 22:13:40.531686   37994 main.go:141] libmachine: (multinode-009530) Calling .DriverName
	I0717 22:13:40.531898   37994 main.go:141] libmachine: (multinode-009530) Calling .DriverName
	I0717 22:13:40.532005   37994 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 22:13:40.532058   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHHostname
	I0717 22:13:40.532109   37994 ssh_runner.go:195] Run: cat /version.json
	I0717 22:13:40.532154   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHHostname
	I0717 22:13:40.534894   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:40.534924   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:40.535341   37994 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:13:40.535371   37994 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:13:40.535398   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:40.535443   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:40.535556   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHPort
	I0717 22:13:40.535740   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHPort
	I0717 22:13:40.535750   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:13:40.535925   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:13:40.535937   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHUsername
	I0717 22:13:40.536096   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHUsername
	I0717 22:13:40.536102   37994 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530/id_rsa Username:docker}
	I0717 22:13:40.536193   37994 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530/id_rsa Username:docker}
	I0717 22:13:40.619378   37994 command_runner.go:130] > {"iso_version": "v1.31.0", "kicbase_version": "v0.0.40", "minikube_version": "v1.31.0", "commit": "be0194f682c2c37366eacb8c13503cb6c7a41cf8"}
	I0717 22:13:40.619520   37994 ssh_runner.go:195] Run: systemctl --version
	I0717 22:13:40.651520   37994 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0717 22:13:40.651573   37994 command_runner.go:130] > systemd 247 (247)
	I0717 22:13:40.651590   37994 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0717 22:13:40.651660   37994 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 22:13:40.801061   37994 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 22:13:40.807901   37994 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0717 22:13:40.807962   37994 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 22:13:40.808020   37994 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:13:40.824239   37994 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0717 22:13:40.824365   37994 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 22:13:40.824394   37994 start.go:466] detecting cgroup driver to use...
	I0717 22:13:40.824449   37994 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 22:13:40.838888   37994 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 22:13:40.852073   37994 docker.go:196] disabling cri-docker service (if available) ...
	I0717 22:13:40.852136   37994 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 22:13:40.865648   37994 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 22:13:40.879742   37994 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 22:13:40.894595   37994 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0717 22:13:41.001079   37994 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 22:13:41.017618   37994 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0717 22:13:41.133058   37994 docker.go:212] disabling docker service ...
	I0717 22:13:41.133124   37994 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 22:13:41.147104   37994 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 22:13:41.159706   37994 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0717 22:13:41.159782   37994 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 22:13:41.173935   37994 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0717 22:13:41.272207   37994 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 22:13:41.285333   37994 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0717 22:13:41.285504   37994 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0717 22:13:41.387910   37994 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 22:13:41.401470   37994 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 22:13:41.419056   37994 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0717 22:13:41.419090   37994 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 22:13:41.419156   37994 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:13:41.428545   37994 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 22:13:41.428602   37994 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:13:41.437843   37994 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:13:41.448520   37994 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:13:41.458195   37994 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 22:13:41.467854   37994 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 22:13:41.475878   37994 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 22:13:41.475983   37994 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 22:13:41.476041   37994 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 22:13:41.487767   37994 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 22:13:41.497647   37994 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 22:13:41.613680   37994 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 22:13:41.793106   37994 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 22:13:41.793184   37994 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 22:13:41.801247   37994 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0717 22:13:41.801266   37994 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0717 22:13:41.801272   37994 command_runner.go:130] > Device: 16h/22d	Inode: 739         Links: 1
	I0717 22:13:41.801278   37994 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 22:13:41.801283   37994 command_runner.go:130] > Access: 2023-07-17 22:13:41.723064079 +0000
	I0717 22:13:41.801288   37994 command_runner.go:130] > Modify: 2023-07-17 22:13:41.723064079 +0000
	I0717 22:13:41.801293   37994 command_runner.go:130] > Change: 2023-07-17 22:13:41.723064079 +0000
	I0717 22:13:41.801297   37994 command_runner.go:130] >  Birth: -
	I0717 22:13:41.801482   37994 start.go:534] Will wait 60s for crictl version
	I0717 22:13:41.801544   37994 ssh_runner.go:195] Run: which crictl
	I0717 22:13:41.805412   37994 command_runner.go:130] > /usr/bin/crictl
	I0717 22:13:41.805588   37994 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 22:13:41.836337   37994 command_runner.go:130] > Version:  0.1.0
	I0717 22:13:41.836365   37994 command_runner.go:130] > RuntimeName:  cri-o
	I0717 22:13:41.836371   37994 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0717 22:13:41.836379   37994 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0717 22:13:41.837986   37994 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 22:13:41.838068   37994 ssh_runner.go:195] Run: crio --version
	I0717 22:13:41.888491   37994 command_runner.go:130] > crio version 1.24.1
	I0717 22:13:41.888516   37994 command_runner.go:130] > Version:          1.24.1
	I0717 22:13:41.888522   37994 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0717 22:13:41.888527   37994 command_runner.go:130] > GitTreeState:     dirty
	I0717 22:13:41.888532   37994 command_runner.go:130] > BuildDate:        2023-07-15T02:24:22Z
	I0717 22:13:41.888536   37994 command_runner.go:130] > GoVersion:        go1.19.9
	I0717 22:13:41.888541   37994 command_runner.go:130] > Compiler:         gc
	I0717 22:13:41.888545   37994 command_runner.go:130] > Platform:         linux/amd64
	I0717 22:13:41.888557   37994 command_runner.go:130] > Linkmode:         dynamic
	I0717 22:13:41.888571   37994 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0717 22:13:41.888578   37994 command_runner.go:130] > SeccompEnabled:   true
	I0717 22:13:41.888585   37994 command_runner.go:130] > AppArmorEnabled:  false
	I0717 22:13:41.888652   37994 ssh_runner.go:195] Run: crio --version
	I0717 22:13:41.933921   37994 command_runner.go:130] > crio version 1.24.1
	I0717 22:13:41.933942   37994 command_runner.go:130] > Version:          1.24.1
	I0717 22:13:41.933955   37994 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0717 22:13:41.933962   37994 command_runner.go:130] > GitTreeState:     dirty
	I0717 22:13:41.933970   37994 command_runner.go:130] > BuildDate:        2023-07-15T02:24:22Z
	I0717 22:13:41.933976   37994 command_runner.go:130] > GoVersion:        go1.19.9
	I0717 22:13:41.933982   37994 command_runner.go:130] > Compiler:         gc
	I0717 22:13:41.933989   37994 command_runner.go:130] > Platform:         linux/amd64
	I0717 22:13:41.933997   37994 command_runner.go:130] > Linkmode:         dynamic
	I0717 22:13:41.934009   37994 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0717 22:13:41.934016   37994 command_runner.go:130] > SeccompEnabled:   true
	I0717 22:13:41.934021   37994 command_runner.go:130] > AppArmorEnabled:  false
	I0717 22:13:41.937647   37994 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 22:13:41.939188   37994 main.go:141] libmachine: (multinode-009530) Calling .GetIP
	I0717 22:13:41.941715   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:41.942055   37994 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:13:41.942086   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:13:41.942266   37994 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 22:13:41.946672   37994 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:13:41.959327   37994 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 22:13:41.959398   37994 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:13:41.998170   37994 command_runner.go:130] > {
	I0717 22:13:41.998191   37994 command_runner.go:130] >   "images": [
	I0717 22:13:41.998195   37994 command_runner.go:130] >     {
	I0717 22:13:41.998201   37994 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0717 22:13:41.998213   37994 command_runner.go:130] >       "repoTags": [
	I0717 22:13:41.998219   37994 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0717 22:13:41.998229   37994 command_runner.go:130] >       ],
	I0717 22:13:41.998233   37994 command_runner.go:130] >       "repoDigests": [
	I0717 22:13:41.998244   37994 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0717 22:13:41.998255   37994 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0717 22:13:41.998261   37994 command_runner.go:130] >       ],
	I0717 22:13:41.998267   37994 command_runner.go:130] >       "size": "750414",
	I0717 22:13:41.998276   37994 command_runner.go:130] >       "uid": {
	I0717 22:13:41.998281   37994 command_runner.go:130] >         "value": "65535"
	I0717 22:13:41.998290   37994 command_runner.go:130] >       },
	I0717 22:13:41.998297   37994 command_runner.go:130] >       "username": "",
	I0717 22:13:41.998309   37994 command_runner.go:130] >       "spec": null
	I0717 22:13:41.998321   37994 command_runner.go:130] >     }
	I0717 22:13:41.998327   37994 command_runner.go:130] >   ]
	I0717 22:13:41.998332   37994 command_runner.go:130] > }
	I0717 22:13:41.998501   37994 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 22:13:41.998561   37994 ssh_runner.go:195] Run: which lz4
	I0717 22:13:42.002520   37994 command_runner.go:130] > /usr/bin/lz4
	I0717 22:13:42.002606   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0717 22:13:42.002705   37994 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 22:13:42.006876   37994 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 22:13:42.006963   37994 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 22:13:42.006993   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0717 22:13:43.779295   37994 crio.go:444] Took 1.776627 seconds to copy over tarball
	I0717 22:13:43.779379   37994 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 22:13:46.588415   37994 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.809005061s)
	I0717 22:13:46.588440   37994 crio.go:451] Took 2.809121 seconds to extract the tarball
	I0717 22:13:46.588448   37994 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 22:13:46.629469   37994 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:13:46.673061   37994 command_runner.go:130] > {
	I0717 22:13:46.673077   37994 command_runner.go:130] >   "images": [
	I0717 22:13:46.673081   37994 command_runner.go:130] >     {
	I0717 22:13:46.673089   37994 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0717 22:13:46.673093   37994 command_runner.go:130] >       "repoTags": [
	I0717 22:13:46.673099   37994 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0717 22:13:46.673102   37994 command_runner.go:130] >       ],
	I0717 22:13:46.673107   37994 command_runner.go:130] >       "repoDigests": [
	I0717 22:13:46.673117   37994 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0717 22:13:46.673134   37994 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0717 22:13:46.673143   37994 command_runner.go:130] >       ],
	I0717 22:13:46.673153   37994 command_runner.go:130] >       "size": "65249302",
	I0717 22:13:46.673161   37994 command_runner.go:130] >       "uid": null,
	I0717 22:13:46.673169   37994 command_runner.go:130] >       "username": "",
	I0717 22:13:46.673178   37994 command_runner.go:130] >       "spec": null
	I0717 22:13:46.673184   37994 command_runner.go:130] >     },
	I0717 22:13:46.673188   37994 command_runner.go:130] >     {
	I0717 22:13:46.673196   37994 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0717 22:13:46.673205   37994 command_runner.go:130] >       "repoTags": [
	I0717 22:13:46.673218   37994 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0717 22:13:46.673228   37994 command_runner.go:130] >       ],
	I0717 22:13:46.673237   37994 command_runner.go:130] >       "repoDigests": [
	I0717 22:13:46.673253   37994 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0717 22:13:46.673268   37994 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0717 22:13:46.673272   37994 command_runner.go:130] >       ],
	I0717 22:13:46.673276   37994 command_runner.go:130] >       "size": "31470524",
	I0717 22:13:46.673280   37994 command_runner.go:130] >       "uid": null,
	I0717 22:13:46.673296   37994 command_runner.go:130] >       "username": "",
	I0717 22:13:46.673308   37994 command_runner.go:130] >       "spec": null
	I0717 22:13:46.673313   37994 command_runner.go:130] >     },
	I0717 22:13:46.673319   37994 command_runner.go:130] >     {
	I0717 22:13:46.673329   37994 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0717 22:13:46.673339   37994 command_runner.go:130] >       "repoTags": [
	I0717 22:13:46.673348   37994 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0717 22:13:46.673357   37994 command_runner.go:130] >       ],
	I0717 22:13:46.673363   37994 command_runner.go:130] >       "repoDigests": [
	I0717 22:13:46.673380   37994 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0717 22:13:46.673397   37994 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0717 22:13:46.673407   37994 command_runner.go:130] >       ],
	I0717 22:13:46.673417   37994 command_runner.go:130] >       "size": "53621675",
	I0717 22:13:46.673426   37994 command_runner.go:130] >       "uid": null,
	I0717 22:13:46.673436   37994 command_runner.go:130] >       "username": "",
	I0717 22:13:46.673445   37994 command_runner.go:130] >       "spec": null
	I0717 22:13:46.673452   37994 command_runner.go:130] >     },
	I0717 22:13:46.673456   37994 command_runner.go:130] >     {
	I0717 22:13:46.673468   37994 command_runner.go:130] >       "id": "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681",
	I0717 22:13:46.673478   37994 command_runner.go:130] >       "repoTags": [
	I0717 22:13:46.673490   37994 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0717 22:13:46.673499   37994 command_runner.go:130] >       ],
	I0717 22:13:46.673509   37994 command_runner.go:130] >       "repoDigests": [
	I0717 22:13:46.673535   37994 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83",
	I0717 22:13:46.673551   37994 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"
	I0717 22:13:46.673560   37994 command_runner.go:130] >       ],
	I0717 22:13:46.673570   37994 command_runner.go:130] >       "size": "297083935",
	I0717 22:13:46.673583   37994 command_runner.go:130] >       "uid": {
	I0717 22:13:46.673593   37994 command_runner.go:130] >         "value": "0"
	I0717 22:13:46.673606   37994 command_runner.go:130] >       },
	I0717 22:13:46.673615   37994 command_runner.go:130] >       "username": "",
	I0717 22:13:46.673625   37994 command_runner.go:130] >       "spec": null
	I0717 22:13:46.673635   37994 command_runner.go:130] >     },
	I0717 22:13:46.673643   37994 command_runner.go:130] >     {
	I0717 22:13:46.673657   37994 command_runner.go:130] >       "id": "08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a",
	I0717 22:13:46.673666   37994 command_runner.go:130] >       "repoTags": [
	I0717 22:13:46.673677   37994 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.3"
	I0717 22:13:46.673685   37994 command_runner.go:130] >       ],
	I0717 22:13:46.673690   37994 command_runner.go:130] >       "repoDigests": [
	I0717 22:13:46.673715   37994 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb",
	I0717 22:13:46.673734   37994 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0"
	I0717 22:13:46.673740   37994 command_runner.go:130] >       ],
	I0717 22:13:46.673750   37994 command_runner.go:130] >       "size": "122065872",
	I0717 22:13:46.673759   37994 command_runner.go:130] >       "uid": {
	I0717 22:13:46.673768   37994 command_runner.go:130] >         "value": "0"
	I0717 22:13:46.673777   37994 command_runner.go:130] >       },
	I0717 22:13:46.673787   37994 command_runner.go:130] >       "username": "",
	I0717 22:13:46.673798   37994 command_runner.go:130] >       "spec": null
	I0717 22:13:46.673807   37994 command_runner.go:130] >     },
	I0717 22:13:46.673815   37994 command_runner.go:130] >     {
	I0717 22:13:46.673828   37994 command_runner.go:130] >       "id": "7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f",
	I0717 22:13:46.673837   37994 command_runner.go:130] >       "repoTags": [
	I0717 22:13:46.673849   37994 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.3"
	I0717 22:13:46.673857   37994 command_runner.go:130] >       ],
	I0717 22:13:46.673861   37994 command_runner.go:130] >       "repoDigests": [
	I0717 22:13:46.673876   37994 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e",
	I0717 22:13:46.673893   37994 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:d3bdc20876edfaa4894cf8464dc98592385a43cbc033b37846dccc2460c7bc06"
	I0717 22:13:46.673902   37994 command_runner.go:130] >       ],
	I0717 22:13:46.673912   37994 command_runner.go:130] >       "size": "113919286",
	I0717 22:13:46.673919   37994 command_runner.go:130] >       "uid": {
	I0717 22:13:46.673928   37994 command_runner.go:130] >         "value": "0"
	I0717 22:13:46.673936   37994 command_runner.go:130] >       },
	I0717 22:13:46.673944   37994 command_runner.go:130] >       "username": "",
	I0717 22:13:46.673952   37994 command_runner.go:130] >       "spec": null
	I0717 22:13:46.673962   37994 command_runner.go:130] >     },
	I0717 22:13:46.673971   37994 command_runner.go:130] >     {
	I0717 22:13:46.673985   37994 command_runner.go:130] >       "id": "5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c",
	I0717 22:13:46.673995   37994 command_runner.go:130] >       "repoTags": [
	I0717 22:13:46.674005   37994 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.3"
	I0717 22:13:46.674014   37994 command_runner.go:130] >       ],
	I0717 22:13:46.674021   37994 command_runner.go:130] >       "repoDigests": [
	I0717 22:13:46.674032   37994 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f",
	I0717 22:13:46.674046   37994 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699"
	I0717 22:13:46.674056   37994 command_runner.go:130] >       ],
	I0717 22:13:46.674071   37994 command_runner.go:130] >       "size": "72713623",
	I0717 22:13:46.674081   37994 command_runner.go:130] >       "uid": null,
	I0717 22:13:46.674089   37994 command_runner.go:130] >       "username": "",
	I0717 22:13:46.674099   37994 command_runner.go:130] >       "spec": null
	I0717 22:13:46.674108   37994 command_runner.go:130] >     },
	I0717 22:13:46.674115   37994 command_runner.go:130] >     {
	I0717 22:13:46.674121   37994 command_runner.go:130] >       "id": "41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a",
	I0717 22:13:46.674132   37994 command_runner.go:130] >       "repoTags": [
	I0717 22:13:46.674145   37994 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.3"
	I0717 22:13:46.674154   37994 command_runner.go:130] >       ],
	I0717 22:13:46.674161   37994 command_runner.go:130] >       "repoDigests": [
	I0717 22:13:46.674176   37994 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082",
	I0717 22:13:46.674249   37994 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8"
	I0717 22:13:46.674260   37994 command_runner.go:130] >       ],
	I0717 22:13:46.674267   37994 command_runner.go:130] >       "size": "59811126",
	I0717 22:13:46.674273   37994 command_runner.go:130] >       "uid": {
	I0717 22:13:46.674280   37994 command_runner.go:130] >         "value": "0"
	I0717 22:13:46.674286   37994 command_runner.go:130] >       },
	I0717 22:13:46.674293   37994 command_runner.go:130] >       "username": "",
	I0717 22:13:46.674299   37994 command_runner.go:130] >       "spec": null
	I0717 22:13:46.674308   37994 command_runner.go:130] >     },
	I0717 22:13:46.674317   37994 command_runner.go:130] >     {
	I0717 22:13:46.674330   37994 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0717 22:13:46.674339   37994 command_runner.go:130] >       "repoTags": [
	I0717 22:13:46.674350   37994 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0717 22:13:46.674363   37994 command_runner.go:130] >       ],
	I0717 22:13:46.674373   37994 command_runner.go:130] >       "repoDigests": [
	I0717 22:13:46.674386   37994 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0717 22:13:46.674399   37994 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0717 22:13:46.674406   37994 command_runner.go:130] >       ],
	I0717 22:13:46.674412   37994 command_runner.go:130] >       "size": "750414",
	I0717 22:13:46.674422   37994 command_runner.go:130] >       "uid": {
	I0717 22:13:46.674433   37994 command_runner.go:130] >         "value": "65535"
	I0717 22:13:46.674442   37994 command_runner.go:130] >       },
	I0717 22:13:46.674451   37994 command_runner.go:130] >       "username": "",
	I0717 22:13:46.674461   37994 command_runner.go:130] >       "spec": null
	I0717 22:13:46.674469   37994 command_runner.go:130] >     }
	I0717 22:13:46.674475   37994 command_runner.go:130] >   ]
	I0717 22:13:46.674482   37994 command_runner.go:130] > }
	I0717 22:13:46.674627   37994 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 22:13:46.674638   37994 cache_images.go:84] Images are preloaded, skipping loading
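All of the tags needed for v1.27.3 already appear in the image list above, so the preload transfer is skipped. For context, a minimal Go sketch of that kind of check, assuming crictl is installed on the node and the JSON uses the images/repoTags fields shown in the log (illustrative only, not minikube's implementation):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the fields of interest from `crictl images -o json`.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// Ask the CRI runtime for its image list; same data as the dump above.
	out, err := exec.Command("sudo", "crictl", "images", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	want := "registry.k8s.io/kube-apiserver:v1.27.3" // one of the expected tags
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Println("preloaded:", want)
				return
			}
		}
	}
	fmt.Println("missing:", want)
}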
	I0717 22:13:46.674687   37994 ssh_runner.go:195] Run: crio config
	I0717 22:13:46.734333   37994 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0717 22:13:46.734366   37994 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0717 22:13:46.734379   37994 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0717 22:13:46.734384   37994 command_runner.go:130] > #
	I0717 22:13:46.734400   37994 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0717 22:13:46.734409   37994 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0717 22:13:46.734419   37994 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0717 22:13:46.734434   37994 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0717 22:13:46.734440   37994 command_runner.go:130] > # reload'.
	I0717 22:13:46.734450   37994 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0717 22:13:46.734464   37994 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0717 22:13:46.734474   37994 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0717 22:13:46.734488   37994 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0717 22:13:46.734496   37994 command_runner.go:130] > [crio]
	I0717 22:13:46.734506   37994 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0717 22:13:46.734518   37994 command_runner.go:130] > # container images, in this directory.
	I0717 22:13:46.734526   37994 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0717 22:13:46.734545   37994 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0717 22:13:46.734556   37994 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0717 22:13:46.734570   37994 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0717 22:13:46.734578   37994 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0717 22:13:46.734583   37994 command_runner.go:130] > storage_driver = "overlay"
	I0717 22:13:46.734595   37994 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0717 22:13:46.734604   37994 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0717 22:13:46.734615   37994 command_runner.go:130] > storage_option = [
	I0717 22:13:46.734627   37994 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0717 22:13:46.734635   37994 command_runner.go:130] > ]
	I0717 22:13:46.734646   37994 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0717 22:13:46.734659   37994 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0717 22:13:46.734669   37994 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0717 22:13:46.734675   37994 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0717 22:13:46.734688   37994 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0717 22:13:46.734707   37994 command_runner.go:130] > # always happen on a node reboot
	I0717 22:13:46.734718   37994 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0717 22:13:46.734731   37994 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0717 22:13:46.734743   37994 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0717 22:13:46.734762   37994 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0717 22:13:46.734776   37994 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0717 22:13:46.734792   37994 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0717 22:13:46.734809   37994 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0717 22:13:46.734892   37994 command_runner.go:130] > # internal_wipe = true
	I0717 22:13:46.735164   37994 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0717 22:13:46.735193   37994 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0717 22:13:46.735219   37994 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0717 22:13:46.735233   37994 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0717 22:13:46.735243   37994 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0717 22:13:46.735249   37994 command_runner.go:130] > [crio.api]
	I0717 22:13:46.735262   37994 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0717 22:13:46.736097   37994 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0717 22:13:46.736120   37994 command_runner.go:130] > # IP address on which the stream server will listen.
	I0717 22:13:46.736128   37994 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0717 22:13:46.736146   37994 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0717 22:13:46.736155   37994 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0717 22:13:46.736162   37994 command_runner.go:130] > # stream_port = "0"
	I0717 22:13:46.736171   37994 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0717 22:13:46.736226   37994 command_runner.go:130] > # stream_enable_tls = false
	I0717 22:13:46.736239   37994 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0717 22:13:46.736247   37994 command_runner.go:130] > # stream_idle_timeout = ""
	I0717 22:13:46.736263   37994 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0717 22:13:46.736277   37994 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0717 22:13:46.736284   37994 command_runner.go:130] > # minutes.
	I0717 22:13:46.736291   37994 command_runner.go:130] > # stream_tls_cert = ""
	I0717 22:13:46.736307   37994 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0717 22:13:46.736317   37994 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0717 22:13:46.736324   37994 command_runner.go:130] > # stream_tls_key = ""
	I0717 22:13:46.736339   37994 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0717 22:13:46.736350   37994 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0717 22:13:46.736365   37994 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0717 22:13:46.736394   37994 command_runner.go:130] > # stream_tls_ca = ""
	I0717 22:13:46.736423   37994 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0717 22:13:46.736442   37994 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0717 22:13:46.736479   37994 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0717 22:13:46.736491   37994 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0717 22:13:46.736542   37994 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0717 22:13:46.736577   37994 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0717 22:13:46.736592   37994 command_runner.go:130] > [crio.runtime]
	I0717 22:13:46.736620   37994 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0717 22:13:46.736676   37994 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0717 22:13:46.736934   37994 command_runner.go:130] > # "nofile=1024:2048"
	I0717 22:13:46.736945   37994 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0717 22:13:46.736953   37994 command_runner.go:130] > # default_ulimits = [
	I0717 22:13:46.736958   37994 command_runner.go:130] > # ]
	I0717 22:13:46.736967   37994 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0717 22:13:46.736985   37994 command_runner.go:130] > # no_pivot = false
	I0717 22:13:46.736996   37994 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0717 22:13:46.737008   37994 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0717 22:13:46.737019   37994 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0717 22:13:46.737031   37994 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0717 22:13:46.737046   37994 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0717 22:13:46.737059   37994 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 22:13:46.737069   37994 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0717 22:13:46.737079   37994 command_runner.go:130] > # Cgroup setting for conmon
	I0717 22:13:46.737092   37994 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0717 22:13:46.737102   37994 command_runner.go:130] > conmon_cgroup = "pod"
	I0717 22:13:46.737115   37994 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0717 22:13:46.737125   37994 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0717 22:13:46.737139   37994 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 22:13:46.737148   37994 command_runner.go:130] > conmon_env = [
	I0717 22:13:46.737203   37994 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0717 22:13:46.737212   37994 command_runner.go:130] > ]
	I0717 22:13:46.737224   37994 command_runner.go:130] > # Additional environment variables to set for all the
	I0717 22:13:46.737235   37994 command_runner.go:130] > # containers. These are overridden if set in the
	I0717 22:13:46.737248   37994 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0717 22:13:46.737258   37994 command_runner.go:130] > # default_env = [
	I0717 22:13:46.737267   37994 command_runner.go:130] > # ]
	I0717 22:13:46.737279   37994 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0717 22:13:46.737288   37994 command_runner.go:130] > # selinux = false
	I0717 22:13:46.737300   37994 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0717 22:13:46.737312   37994 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0717 22:13:46.737327   37994 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0717 22:13:46.737341   37994 command_runner.go:130] > # seccomp_profile = ""
	I0717 22:13:46.737352   37994 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0717 22:13:46.737363   37994 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0717 22:13:46.737375   37994 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0717 22:13:46.737385   37994 command_runner.go:130] > # which might increase security.
	I0717 22:13:46.737395   37994 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0717 22:13:46.737408   37994 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0717 22:13:46.737420   37994 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0717 22:13:46.737433   37994 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0717 22:13:46.737445   37994 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0717 22:13:46.737457   37994 command_runner.go:130] > # This option supports live configuration reload.
	I0717 22:13:46.737468   37994 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0717 22:13:46.737480   37994 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0717 22:13:46.737490   37994 command_runner.go:130] > # the cgroup blockio controller.
	I0717 22:13:46.737498   37994 command_runner.go:130] > # blockio_config_file = ""
	I0717 22:13:46.737508   37994 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0717 22:13:46.737527   37994 command_runner.go:130] > # irqbalance daemon.
	I0717 22:13:46.737536   37994 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0717 22:13:46.737549   37994 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0717 22:13:46.737560   37994 command_runner.go:130] > # This option supports live configuration reload.
	I0717 22:13:46.737570   37994 command_runner.go:130] > # rdt_config_file = ""
	I0717 22:13:46.737582   37994 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0717 22:13:46.737591   37994 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0717 22:13:46.737601   37994 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0717 22:13:46.737610   37994 command_runner.go:130] > # separate_pull_cgroup = ""
	I0717 22:13:46.737624   37994 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0717 22:13:46.737638   37994 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0717 22:13:46.737648   37994 command_runner.go:130] > # will be added.
	I0717 22:13:46.737658   37994 command_runner.go:130] > # default_capabilities = [
	I0717 22:13:46.737667   37994 command_runner.go:130] > # 	"CHOWN",
	I0717 22:13:46.737675   37994 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0717 22:13:46.737679   37994 command_runner.go:130] > # 	"FSETID",
	I0717 22:13:46.737685   37994 command_runner.go:130] > # 	"FOWNER",
	I0717 22:13:46.737690   37994 command_runner.go:130] > # 	"SETGID",
	I0717 22:13:46.737696   37994 command_runner.go:130] > # 	"SETUID",
	I0717 22:13:46.737700   37994 command_runner.go:130] > # 	"SETPCAP",
	I0717 22:13:46.737706   37994 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0717 22:13:46.737710   37994 command_runner.go:130] > # 	"KILL",
	I0717 22:13:46.737716   37994 command_runner.go:130] > # ]
	I0717 22:13:46.737722   37994 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0717 22:13:46.737730   37994 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 22:13:46.737734   37994 command_runner.go:130] > # default_sysctls = [
	I0717 22:13:46.737737   37994 command_runner.go:130] > # ]
	I0717 22:13:46.737745   37994 command_runner.go:130] > # List of devices on the host that a
	I0717 22:13:46.737757   37994 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0717 22:13:46.737767   37994 command_runner.go:130] > # allowed_devices = [
	I0717 22:13:46.737776   37994 command_runner.go:130] > # 	"/dev/fuse",
	I0717 22:13:46.737785   37994 command_runner.go:130] > # ]
	I0717 22:13:46.737795   37994 command_runner.go:130] > # List of additional devices, specified as
	I0717 22:13:46.737811   37994 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0717 22:13:46.737821   37994 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0717 22:13:46.737863   37994 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 22:13:46.737875   37994 command_runner.go:130] > # additional_devices = [
	I0717 22:13:46.737880   37994 command_runner.go:130] > # ]
	I0717 22:13:46.737893   37994 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0717 22:13:46.737902   37994 command_runner.go:130] > # cdi_spec_dirs = [
	I0717 22:13:46.737912   37994 command_runner.go:130] > # 	"/etc/cdi",
	I0717 22:13:46.737921   37994 command_runner.go:130] > # 	"/var/run/cdi",
	I0717 22:13:46.737930   37994 command_runner.go:130] > # ]
	I0717 22:13:46.737940   37994 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0717 22:13:46.737949   37994 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0717 22:13:46.737956   37994 command_runner.go:130] > # Defaults to false.
	I0717 22:13:46.737968   37994 command_runner.go:130] > # device_ownership_from_security_context = false
	I0717 22:13:46.737982   37994 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0717 22:13:46.737996   37994 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0717 22:13:46.738006   37994 command_runner.go:130] > # hooks_dir = [
	I0717 22:13:46.738017   37994 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0717 22:13:46.738026   37994 command_runner.go:130] > # ]
	I0717 22:13:46.738039   37994 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0717 22:13:46.738053   37994 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0717 22:13:46.738065   37994 command_runner.go:130] > # its default mounts from the following two files:
	I0717 22:13:46.738074   37994 command_runner.go:130] > #
	I0717 22:13:46.738087   37994 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0717 22:13:46.738101   37994 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0717 22:13:46.738111   37994 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0717 22:13:46.738114   37994 command_runner.go:130] > #
	I0717 22:13:46.738122   37994 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0717 22:13:46.738137   37994 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0717 22:13:46.738150   37994 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0717 22:13:46.738194   37994 command_runner.go:130] > #      only add mounts it finds in this file.
	I0717 22:13:46.738211   37994 command_runner.go:130] > #
	I0717 22:13:46.738217   37994 command_runner.go:130] > # default_mounts_file = ""
	I0717 22:13:46.738225   37994 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0717 22:13:46.738238   37994 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0717 22:13:46.738248   37994 command_runner.go:130] > pids_limit = 1024
	I0717 22:13:46.738259   37994 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0717 22:13:46.738272   37994 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0717 22:13:46.738287   37994 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0717 22:13:46.738309   37994 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0717 22:13:46.738319   37994 command_runner.go:130] > # log_size_max = -1
	I0717 22:13:46.738333   37994 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0717 22:13:46.738340   37994 command_runner.go:130] > # log_to_journald = false
	I0717 22:13:46.738349   37994 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0717 22:13:46.738361   37994 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0717 22:13:46.738374   37994 command_runner.go:130] > # Path to directory for container attach sockets.
	I0717 22:13:46.738385   37994 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0717 22:13:46.738397   37994 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0717 22:13:46.738407   37994 command_runner.go:130] > # bind_mount_prefix = ""
	I0717 22:13:46.738416   37994 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0717 22:13:46.738424   37994 command_runner.go:130] > # read_only = false
	I0717 22:13:46.738430   37994 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0717 22:13:46.738443   37994 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0717 22:13:46.738454   37994 command_runner.go:130] > # live configuration reload.
	I0717 22:13:46.738461   37994 command_runner.go:130] > # log_level = "info"
	I0717 22:13:46.738474   37994 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0717 22:13:46.738486   37994 command_runner.go:130] > # This option supports live configuration reload.
	I0717 22:13:46.738499   37994 command_runner.go:130] > # log_filter = ""
	I0717 22:13:46.738512   37994 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0717 22:13:46.738524   37994 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0717 22:13:46.738531   37994 command_runner.go:130] > # separated by comma.
	I0717 22:13:46.738537   37994 command_runner.go:130] > # uid_mappings = ""
	I0717 22:13:46.738551   37994 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0717 22:13:46.738565   37994 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0717 22:13:46.738575   37994 command_runner.go:130] > # separated by comma.
	I0717 22:13:46.738584   37994 command_runner.go:130] > # gid_mappings = ""
	I0717 22:13:46.738595   37994 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0717 22:13:46.738607   37994 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 22:13:46.738616   37994 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 22:13:46.738623   37994 command_runner.go:130] > # minimum_mappable_uid = -1
	I0717 22:13:46.738637   37994 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0717 22:13:46.738650   37994 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 22:13:46.738663   37994 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 22:13:46.738673   37994 command_runner.go:130] > # minimum_mappable_gid = -1
	I0717 22:13:46.738684   37994 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0717 22:13:46.738699   37994 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0717 22:13:46.738710   37994 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0717 22:13:46.738721   37994 command_runner.go:130] > # ctr_stop_timeout = 30
	I0717 22:13:46.738735   37994 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0717 22:13:46.738747   37994 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0717 22:13:46.738759   37994 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0717 22:13:46.738771   37994 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0717 22:13:46.738781   37994 command_runner.go:130] > drop_infra_ctr = false
	I0717 22:13:46.738788   37994 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0717 22:13:46.738799   37994 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0717 22:13:46.738816   37994 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0717 22:13:46.738826   37994 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0717 22:13:46.738840   37994 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0717 22:13:46.738851   37994 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0717 22:13:46.738861   37994 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0717 22:13:46.738875   37994 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0717 22:13:46.738883   37994 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0717 22:13:46.738896   37994 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0717 22:13:46.738913   37994 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0717 22:13:46.738927   37994 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0717 22:13:46.738937   37994 command_runner.go:130] > # default_runtime = "runc"
	I0717 22:13:46.738949   37994 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0717 22:13:46.738963   37994 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0717 22:13:46.738981   37994 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0717 22:13:46.738993   37994 command_runner.go:130] > # creation as a file is not desired either.
	I0717 22:13:46.739010   37994 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0717 22:13:46.739030   37994 command_runner.go:130] > # the hostname is being managed dynamically.
	I0717 22:13:46.739040   37994 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0717 22:13:46.739045   37994 command_runner.go:130] > # ]
	I0717 22:13:46.739057   37994 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0717 22:13:46.739069   37994 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0717 22:13:46.739082   37994 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0717 22:13:46.739095   37994 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0717 22:13:46.739100   37994 command_runner.go:130] > #
	I0717 22:13:46.739111   37994 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0717 22:13:46.739121   37994 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0717 22:13:46.739135   37994 command_runner.go:130] > #  runtime_type = "oci"
	I0717 22:13:46.739145   37994 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0717 22:13:46.739155   37994 command_runner.go:130] > #  privileged_without_host_devices = false
	I0717 22:13:46.739164   37994 command_runner.go:130] > #  allowed_annotations = []
	I0717 22:13:46.739169   37994 command_runner.go:130] > # Where:
	I0717 22:13:46.739177   37994 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0717 22:13:46.739190   37994 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0717 22:13:46.739209   37994 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0717 22:13:46.739223   37994 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0717 22:13:46.739232   37994 command_runner.go:130] > #   in $PATH.
	I0717 22:13:46.739244   37994 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0717 22:13:46.739255   37994 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0717 22:13:46.739269   37994 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0717 22:13:46.739277   37994 command_runner.go:130] > #   state.
	I0717 22:13:46.739289   37994 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0717 22:13:46.739301   37994 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0717 22:13:46.739311   37994 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0717 22:13:46.739318   37994 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0717 22:13:46.739328   37994 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0717 22:13:46.739336   37994 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0717 22:13:46.739346   37994 command_runner.go:130] > #   The currently recognized values are:
	I0717 22:13:46.739352   37994 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0717 22:13:46.739361   37994 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0717 22:13:46.739369   37994 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0717 22:13:46.739378   37994 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0717 22:13:46.739387   37994 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0717 22:13:46.739395   37994 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0717 22:13:46.739403   37994 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0717 22:13:46.739412   37994 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0717 22:13:46.739418   37994 command_runner.go:130] > #   should be moved to the container's cgroup
	I0717 22:13:46.739423   37994 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0717 22:13:46.739429   37994 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0717 22:13:46.739433   37994 command_runner.go:130] > runtime_type = "oci"
	I0717 22:13:46.739440   37994 command_runner.go:130] > runtime_root = "/run/runc"
	I0717 22:13:46.739444   37994 command_runner.go:130] > runtime_config_path = ""
	I0717 22:13:46.739450   37994 command_runner.go:130] > monitor_path = ""
	I0717 22:13:46.739457   37994 command_runner.go:130] > monitor_cgroup = ""
	I0717 22:13:46.739465   37994 command_runner.go:130] > monitor_exec_cgroup = ""
	I0717 22:13:46.739472   37994 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0717 22:13:46.739478   37994 command_runner.go:130] > # running containers
	I0717 22:13:46.739482   37994 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0717 22:13:46.739488   37994 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0717 22:13:46.739545   37994 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0717 22:13:46.739555   37994 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0717 22:13:46.739562   37994 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0717 22:13:46.739567   37994 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0717 22:13:46.739573   37994 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0717 22:13:46.739578   37994 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0717 22:13:46.739585   37994 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0717 22:13:46.739589   37994 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0717 22:13:46.739597   37994 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0717 22:13:46.739602   37994 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0717 22:13:46.739611   37994 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0717 22:13:46.739618   37994 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0717 22:13:46.739630   37994 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0717 22:13:46.739638   37994 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0717 22:13:46.739647   37994 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0717 22:13:46.739657   37994 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0717 22:13:46.739662   37994 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0717 22:13:46.739674   37994 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0717 22:13:46.739681   37994 command_runner.go:130] > # Example:
	I0717 22:13:46.739686   37994 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0717 22:13:46.739695   37994 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0717 22:13:46.739702   37994 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0717 22:13:46.739710   37994 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0717 22:13:46.739718   37994 command_runner.go:130] > # cpuset = 0
	I0717 22:13:46.739725   37994 command_runner.go:130] > # cpushares = "0-1"
	I0717 22:13:46.739734   37994 command_runner.go:130] > # Where:
	I0717 22:13:46.739742   37994 command_runner.go:130] > # The workload name is workload-type.
	I0717 22:13:46.739754   37994 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0717 22:13:46.739762   37994 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0717 22:13:46.739770   37994 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0717 22:13:46.739781   37994 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0717 22:13:46.739789   37994 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0717 22:13:46.739795   37994 command_runner.go:130] > # 
	I0717 22:13:46.739801   37994 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0717 22:13:46.739806   37994 command_runner.go:130] > #
	I0717 22:13:46.739813   37994 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0717 22:13:46.739827   37994 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0717 22:13:46.739841   37994 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0717 22:13:46.739856   37994 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0717 22:13:46.739866   37994 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0717 22:13:46.739872   37994 command_runner.go:130] > [crio.image]
	I0717 22:13:46.739878   37994 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0717 22:13:46.739885   37994 command_runner.go:130] > # default_transport = "docker://"
	I0717 22:13:46.739890   37994 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0717 22:13:46.739899   37994 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0717 22:13:46.739905   37994 command_runner.go:130] > # global_auth_file = ""
	I0717 22:13:46.739910   37994 command_runner.go:130] > # The image used to instantiate infra containers.
	I0717 22:13:46.739922   37994 command_runner.go:130] > # This option supports live configuration reload.
	I0717 22:13:46.739939   37994 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0717 22:13:46.739954   37994 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0717 22:13:46.739966   37994 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0717 22:13:46.739974   37994 command_runner.go:130] > # This option supports live configuration reload.
	I0717 22:13:46.739978   37994 command_runner.go:130] > # pause_image_auth_file = ""
	I0717 22:13:46.739988   37994 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0717 22:13:46.739996   37994 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0717 22:13:46.740002   37994 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0717 22:13:46.740030   37994 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0717 22:13:46.740040   37994 command_runner.go:130] > # pause_command = "/pause"
	I0717 22:13:46.740050   37994 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0717 22:13:46.740060   37994 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0717 22:13:46.740070   37994 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0717 22:13:46.740077   37994 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0717 22:13:46.740082   37994 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0717 22:13:46.740088   37994 command_runner.go:130] > # signature_policy = ""
	I0717 22:13:46.740098   37994 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0717 22:13:46.740113   37994 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0717 22:13:46.740127   37994 command_runner.go:130] > # changing them here.
	I0717 22:13:46.740137   37994 command_runner.go:130] > # insecure_registries = [
	I0717 22:13:46.740146   37994 command_runner.go:130] > # ]
	I0717 22:13:46.740157   37994 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0717 22:13:46.740165   37994 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0717 22:13:46.740174   37994 command_runner.go:130] > # image_volumes = "mkdir"
	I0717 22:13:46.740187   37994 command_runner.go:130] > # Temporary directory to use for storing big files
	I0717 22:13:46.740202   37994 command_runner.go:130] > # big_files_temporary_dir = ""
	I0717 22:13:46.740213   37994 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0717 22:13:46.740219   37994 command_runner.go:130] > # CNI plugins.
	I0717 22:13:46.740226   37994 command_runner.go:130] > [crio.network]
	I0717 22:13:46.740235   37994 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0717 22:13:46.740243   37994 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0717 22:13:46.740248   37994 command_runner.go:130] > # cni_default_network = ""
	I0717 22:13:46.740254   37994 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0717 22:13:46.740261   37994 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0717 22:13:46.740271   37994 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0717 22:13:46.740278   37994 command_runner.go:130] > # plugin_dirs = [
	I0717 22:13:46.740287   37994 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0717 22:13:46.740293   37994 command_runner.go:130] > # ]
	I0717 22:13:46.740302   37994 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0717 22:13:46.740308   37994 command_runner.go:130] > [crio.metrics]
	I0717 22:13:46.740316   37994 command_runner.go:130] > # Globally enable or disable metrics support.
	I0717 22:13:46.740323   37994 command_runner.go:130] > enable_metrics = true
	I0717 22:13:46.740330   37994 command_runner.go:130] > # Specify enabled metrics collectors.
	I0717 22:13:46.740335   37994 command_runner.go:130] > # Per default all metrics are enabled.
	I0717 22:13:46.740342   37994 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0717 22:13:46.740352   37994 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0717 22:13:46.740362   37994 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0717 22:13:46.740369   37994 command_runner.go:130] > # metrics_collectors = [
	I0717 22:13:46.740380   37994 command_runner.go:130] > # 	"operations",
	I0717 22:13:46.740393   37994 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0717 22:13:46.740404   37994 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0717 22:13:46.740416   37994 command_runner.go:130] > # 	"operations_errors",
	I0717 22:13:46.740422   37994 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0717 22:13:46.740428   37994 command_runner.go:130] > # 	"image_pulls_by_name",
	I0717 22:13:46.740442   37994 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0717 22:13:46.740452   37994 command_runner.go:130] > # 	"image_pulls_failures",
	I0717 22:13:46.740460   37994 command_runner.go:130] > # 	"image_pulls_successes",
	I0717 22:13:46.740471   37994 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0717 22:13:46.740481   37994 command_runner.go:130] > # 	"image_layer_reuse",
	I0717 22:13:46.740491   37994 command_runner.go:130] > # 	"containers_oom_total",
	I0717 22:13:46.740501   37994 command_runner.go:130] > # 	"containers_oom",
	I0717 22:13:46.740510   37994 command_runner.go:130] > # 	"processes_defunct",
	I0717 22:13:46.740520   37994 command_runner.go:130] > # 	"operations_total",
	I0717 22:13:46.740527   37994 command_runner.go:130] > # 	"operations_latency_seconds",
	I0717 22:13:46.740534   37994 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0717 22:13:46.740545   37994 command_runner.go:130] > # 	"operations_errors_total",
	I0717 22:13:46.740556   37994 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0717 22:13:46.740564   37994 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0717 22:13:46.740575   37994 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0717 22:13:46.740585   37994 command_runner.go:130] > # 	"image_pulls_success_total",
	I0717 22:13:46.740595   37994 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0717 22:13:46.740605   37994 command_runner.go:130] > # 	"containers_oom_count_total",
	I0717 22:13:46.740613   37994 command_runner.go:130] > # ]
	I0717 22:13:46.740623   37994 command_runner.go:130] > # The port on which the metrics server will listen.
	I0717 22:13:46.740628   37994 command_runner.go:130] > # metrics_port = 9090
	I0717 22:13:46.740639   37994 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0717 22:13:46.740649   37994 command_runner.go:130] > # metrics_socket = ""
	I0717 22:13:46.740659   37994 command_runner.go:130] > # The certificate for the secure metrics server.
	I0717 22:13:46.740672   37994 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0717 22:13:46.740687   37994 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0717 22:13:46.740697   37994 command_runner.go:130] > # certificate on any modification event.
	I0717 22:13:46.740707   37994 command_runner.go:130] > # metrics_cert = ""
	I0717 22:13:46.740715   37994 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0717 22:13:46.740723   37994 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0717 22:13:46.740728   37994 command_runner.go:130] > # metrics_key = ""
	I0717 22:13:46.740738   37994 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0717 22:13:46.740748   37994 command_runner.go:130] > [crio.tracing]
	I0717 22:13:46.740757   37994 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0717 22:13:46.740835   37994 command_runner.go:130] > # enable_tracing = false
	I0717 22:13:46.740869   37994 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0717 22:13:46.740894   37994 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0717 22:13:46.740909   37994 command_runner.go:130] > # Number of samples to collect per million spans.
	I0717 22:13:46.740922   37994 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0717 22:13:46.740937   37994 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0717 22:13:46.740951   37994 command_runner.go:130] > [crio.stats]
	I0717 22:13:46.740965   37994 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0717 22:13:46.740983   37994 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0717 22:13:46.740996   37994 command_runner.go:130] > # stats_collection_period = 0
	I0717 22:13:46.741037   37994 command_runner.go:130] ! time="2023-07-17 22:13:46.678499366Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0717 22:13:46.741057   37994 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
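The dump above is CRI-O's effective TOML configuration as reported by `crio config`. A small, illustrative Go sketch that runs the same command and picks out a few of the uncommented keys minikube relies on here (cgroup_manager, pause_image, pids_limit):

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Dump the effective CRI-O configuration, exactly as in the log above.
	out, err := exec.Command("sudo", "crio", "config").Output()
	if err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		// Report only the handful of settings of interest.
		if strings.HasPrefix(line, "cgroup_manager") ||
			strings.HasPrefix(line, "pause_image") ||
			strings.HasPrefix(line, "pids_limit") {
			fmt.Println(line)
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}
}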
	I0717 22:13:46.741151   37994 cni.go:84] Creating CNI manager for ""
	I0717 22:13:46.741168   37994 cni.go:137] 3 nodes found, recommending kindnet
	I0717 22:13:46.741184   37994 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
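With allocate-node-cidrs enabled (see the controllerManager extraArgs below), the controller-manager carves a per-node range out of this /16, by default a /24 per node. A quick illustrative calculation of what those ranges look like for a three-node cluster; the allocation scheme here is an assumption for demonstration, not taken from the kindnet source:

package main

import (
	"fmt"
	"net"
)

func main() {
	_, cluster, err := net.ParseCIDR("10.244.0.0/16")
	if err != nil {
		panic(err)
	}
	base := cluster.IP.To4()
	for node := 0; node < 3; node++ {
		// The third octet selects the node's /24 within the /16.
		sub := net.IPNet{
			IP:   net.IPv4(base[0], base[1], byte(node), 0),
			Mask: net.CIDRMask(24, 32),
		}
		fmt.Printf("node %d pod CIDR: %s\n", node, sub.String())
	}
}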
	I0717 22:13:46.741209   37994 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.222 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-009530 NodeName:multinode-009530 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.222 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 22:13:46.741372   37994 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.222
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-009530"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.222
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.222"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 22:13:46.741456   37994 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-009530 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:multinode-009530 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 22:13:46.741539   37994 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 22:13:46.751431   37994 command_runner.go:130] > kubeadm
	I0717 22:13:46.751460   37994 command_runner.go:130] > kubectl
	I0717 22:13:46.751467   37994 command_runner.go:130] > kubelet
	I0717 22:13:46.751494   37994 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 22:13:46.751550   37994 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 22:13:46.761713   37994 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0717 22:13:46.779249   37994 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 22:13:46.796439   37994 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
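The kubeadm document dumped above is rendered in memory with node-specific values (advertise address, node name, CRI socket) and then copied to /var/tmp/minikube/kubeadm.yaml.new on the node. A minimal sketch of that kind of templating using only the Go standard library; the template text and struct fields here are illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// initConfig holds the per-node values substituted into a generated
// kubeadm InitConfiguration; the field names are illustrative only.
type initConfig struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
  taints: []
`

func main() {
	cfg := initConfig{
		AdvertiseAddress: "192.168.39.222",
		BindPort:         8443,
		NodeName:         "multinode-009530",
		CRISocket:        "unix:///var/run/crio/crio.sock",
	}
	// Render to stdout; in practice the result would be written out as
	// kubeadm.yaml.new and later diffed against the live copy.
	t := template.Must(template.New("init").Parse(initTmpl))
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}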
	I0717 22:13:46.814584   37994 ssh_runner.go:195] Run: grep 192.168.39.222	control-plane.minikube.internal$ /etc/hosts
	I0717 22:13:46.818583   37994 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.222	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
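The bash one-liner above updates /etc/hosts idempotently: it filters out any existing control-plane.minikube.internal line, appends the fresh "ip<TAB>hostname" entry, and copies the temp file back over /etc/hosts. A rough local-file equivalent in Go, assuming direct file access rather than minikube's SSH runner (only the IP, hostname, and path mirror the log; the helper itself is illustrative):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any prior line ending in "<TAB>hostname" and appends
// a fresh "ip<TAB>hostname" entry, mirroring the grep/echo/cp idiom above.
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // discard the stale entry for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	err := ensureHostsEntry("/etc/hosts", "192.168.39.222", "control-plane.minikube.internal")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}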
	I0717 22:13:46.830857   37994 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530 for IP: 192.168.39.222
	I0717 22:13:46.830886   37994 certs.go:190] acquiring lock for shared ca certs: {Name:mk358cdd8ffcf2f8ada4337c2bb687d932ef5afe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:13:46.831066   37994 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key
	I0717 22:13:46.831128   37994 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key
	I0717 22:13:46.831192   37994 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/client.key
	I0717 22:13:46.831247   37994 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/apiserver.key.ac9b12d1
	I0717 22:13:46.831293   37994 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/proxy-client.key
	I0717 22:13:46.831303   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 22:13:46.831318   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 22:13:46.831331   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 22:13:46.831348   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 22:13:46.831367   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 22:13:46.831380   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 22:13:46.831392   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 22:13:46.831403   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 22:13:46.831453   37994 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem (1338 bytes)
	W0717 22:13:46.831485   37994 certs.go:433] ignoring /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990_empty.pem, impossibly tiny 0 bytes
	I0717 22:13:46.831495   37994 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 22:13:46.831522   37994 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem (1078 bytes)
	I0717 22:13:46.831558   37994 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem (1123 bytes)
	I0717 22:13:46.831580   37994 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem (1675 bytes)
	I0717 22:13:46.831623   37994 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:13:46.831662   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem -> /usr/share/ca-certificates/22990.pem
	I0717 22:13:46.831684   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> /usr/share/ca-certificates/229902.pem
	I0717 22:13:46.831701   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:13:46.832230   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 22:13:46.856067   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 22:13:46.878941   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 22:13:46.901358   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 22:13:46.924188   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 22:13:46.946710   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 22:13:46.969613   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 22:13:46.992664   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 22:13:47.016377   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem --> /usr/share/ca-certificates/22990.pem (1338 bytes)
	I0717 22:13:47.039575   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /usr/share/ca-certificates/229902.pem (1708 bytes)
	I0717 22:13:47.063133   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 22:13:47.086164   37994 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 22:13:47.105183   37994 ssh_runner.go:195] Run: openssl version
	I0717 22:13:47.110649   37994 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0717 22:13:47.111044   37994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/229902.pem && ln -fs /usr/share/ca-certificates/229902.pem /etc/ssl/certs/229902.pem"
	I0717 22:13:47.125032   37994 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/229902.pem
	I0717 22:13:47.130084   37994 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 21:49 /usr/share/ca-certificates/229902.pem
	I0717 22:13:47.130113   37994 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 21:49 /usr/share/ca-certificates/229902.pem
	I0717 22:13:47.130156   37994 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/229902.pem
	I0717 22:13:47.135963   37994 command_runner.go:130] > 3ec20f2e
	I0717 22:13:47.136197   37994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/229902.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 22:13:47.147263   37994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 22:13:47.158481   37994 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:13:47.163403   37994 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:13:47.163611   37994 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:13:47.163675   37994 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:13:47.169269   37994 command_runner.go:130] > b5213941
	I0717 22:13:47.169400   37994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 22:13:47.180376   37994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22990.pem && ln -fs /usr/share/ca-certificates/22990.pem /etc/ssl/certs/22990.pem"
	I0717 22:13:47.192098   37994 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22990.pem
	I0717 22:13:47.197074   37994 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 21:49 /usr/share/ca-certificates/22990.pem
	I0717 22:13:47.197105   37994 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 21:49 /usr/share/ca-certificates/22990.pem
	I0717 22:13:47.197147   37994 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22990.pem
	I0717 22:13:47.202809   37994 command_runner.go:130] > 51391683
	I0717 22:13:47.203137   37994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22990.pem /etc/ssl/certs/51391683.0"
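Each CA file is installed by hashing its subject with openssl and symlinking the certificate as <hash>.0 under /etc/ssl/certs, which is the layout OpenSSL scans when verifying peers. A small sketch of the same two steps, shelling out to openssl exactly as the log does (the paths are illustrative and the helper is not minikube's code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA computes the OpenSSL subject hash of certPath and links it as
// <hash>.0 inside certsDir, so tools that scan /etc/ssl/certs can find it.
func installCA(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace a stale link, if any
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}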
	I0717 22:13:47.214428   37994 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 22:13:47.219168   37994 command_runner.go:130] > ca.crt
	I0717 22:13:47.219183   37994 command_runner.go:130] > ca.key
	I0717 22:13:47.219188   37994 command_runner.go:130] > healthcheck-client.crt
	I0717 22:13:47.219192   37994 command_runner.go:130] > healthcheck-client.key
	I0717 22:13:47.219197   37994 command_runner.go:130] > peer.crt
	I0717 22:13:47.219203   37994 command_runner.go:130] > peer.key
	I0717 22:13:47.219214   37994 command_runner.go:130] > server.crt
	I0717 22:13:47.219221   37994 command_runner.go:130] > server.key
	I0717 22:13:47.219277   37994 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 22:13:47.225321   37994 command_runner.go:130] > Certificate will not expire
	I0717 22:13:47.225656   37994 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 22:13:47.232755   37994 command_runner.go:130] > Certificate will not expire
	I0717 22:13:47.232818   37994 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 22:13:47.238719   37994 command_runner.go:130] > Certificate will not expire
	I0717 22:13:47.239136   37994 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 22:13:47.245019   37994 command_runner.go:130] > Certificate will not expire
	I0717 22:13:47.245137   37994 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 22:13:47.250849   37994 command_runner.go:130] > Certificate will not expire
	I0717 22:13:47.250921   37994 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 22:13:47.257412   37994 command_runner.go:130] > Certificate will not expire
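The "-checkend 86400" calls above make openssl exit non-zero if a certificate expires within the next 24 hours, which is how the restart path decides whether the existing certs are still usable. A minimal wrapper around that exit-status convention (the certificate paths mirror the log; the helper is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// expiresSoon reports whether the certificate at path expires within the
// given number of seconds, based on openssl's -checkend exit status.
func expiresSoon(path string, seconds int) (bool, error) {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", path,
		"-checkend", fmt.Sprint(seconds))
	err := cmd.Run()
	if err == nil {
		return false, nil // openssl printed "Certificate will not expire"
	}
	if _, ok := err.(*exec.ExitError); ok {
		return true, nil // non-zero exit: expires within the window
	}
	return false, err // openssl itself failed to run
}

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		soon, err := expiresSoon(c, 86400)
		fmt.Println(c, "expiresSoon:", soon, "err:", err)
	}
}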
	I0717 22:13:47.257565   37994 kubeadm.go:404] StartCluster: {Name:multinode-009530 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-009530 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.222 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.146 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.205 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:fals
e istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgen
tPID:0}
	I0717 22:13:47.257696   37994 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 22:13:47.257762   37994 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:13:47.292575   37994 cri.go:89] found id: ""
	I0717 22:13:47.292645   37994 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 22:13:47.303585   37994 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0717 22:13:47.389333   37994 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0717 22:13:47.389356   37994 command_runner.go:130] > /var/lib/minikube/etcd:
	I0717 22:13:47.389362   37994 command_runner.go:130] > member
	I0717 22:13:47.389391   37994 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 22:13:47.389405   37994 kubeadm.go:636] restartCluster start
	I0717 22:13:47.389460   37994 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 22:13:47.401638   37994 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:13:47.402201   37994 kubeconfig.go:92] found "multinode-009530" server: "https://192.168.39.222:8443"
	I0717 22:13:47.402659   37994 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:13:47.402913   37994 kapi.go:59] client config for multinode-009530: &rest.Config{Host:"https://192.168.39.222:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/client.crt", KeyFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/client.key", CAFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 22:13:47.403582   37994 cert_rotation.go:137] Starting client certificate rotation controller
	I0717 22:13:47.403757   37994 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 22:13:47.414746   37994 api_server.go:166] Checking apiserver status ...
	I0717 22:13:47.414806   37994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:13:47.428454   37994 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:13:47.929547   37994 api_server.go:166] Checking apiserver status ...
	I0717 22:13:47.929628   37994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:13:47.944021   37994 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:13:48.429566   37994 api_server.go:166] Checking apiserver status ...
	I0717 22:13:48.429646   37994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:13:48.441951   37994 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:13:48.928530   37994 api_server.go:166] Checking apiserver status ...
	I0717 22:13:48.928647   37994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:13:48.942338   37994 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:13:49.428882   37994 api_server.go:166] Checking apiserver status ...
	I0717 22:13:49.428966   37994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:13:49.441178   37994 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:13:49.928804   37994 api_server.go:166] Checking apiserver status ...
	I0717 22:13:49.928891   37994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:13:49.941391   37994 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:13:50.429489   37994 api_server.go:166] Checking apiserver status ...
	I0717 22:13:50.429585   37994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:13:50.441810   37994 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:13:50.929468   37994 api_server.go:166] Checking apiserver status ...
	I0717 22:13:50.929593   37994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:13:50.943691   37994 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:13:51.429362   37994 api_server.go:166] Checking apiserver status ...
	I0717 22:13:51.429430   37994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:13:51.441749   37994 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:13:51.929410   37994 api_server.go:166] Checking apiserver status ...
	I0717 22:13:51.929473   37994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:13:51.942050   37994 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:13:52.428758   37994 api_server.go:166] Checking apiserver status ...
	I0717 22:13:52.428826   37994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:13:52.441187   37994 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:13:52.929557   37994 api_server.go:166] Checking apiserver status ...
	I0717 22:13:52.929630   37994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:13:52.942561   37994 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:13:53.429206   37994 api_server.go:166] Checking apiserver status ...
	I0717 22:13:53.429297   37994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:13:53.442528   37994 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:13:53.929295   37994 api_server.go:166] Checking apiserver status ...
	I0717 22:13:53.929359   37994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:13:53.941425   37994 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:13:54.429002   37994 api_server.go:166] Checking apiserver status ...
	I0717 22:13:54.429067   37994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:13:54.442152   37994 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:13:54.928705   37994 api_server.go:166] Checking apiserver status ...
	I0717 22:13:54.928799   37994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:13:54.940991   37994 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:13:55.429138   37994 api_server.go:166] Checking apiserver status ...
	I0717 22:13:55.429210   37994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:13:55.441476   37994 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:13:55.928671   37994 api_server.go:166] Checking apiserver status ...
	I0717 22:13:55.928753   37994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:13:55.941020   37994 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:13:56.428548   37994 api_server.go:166] Checking apiserver status ...
	I0717 22:13:56.428636   37994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:13:56.440801   37994 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:13:56.929418   37994 api_server.go:166] Checking apiserver status ...
	I0717 22:13:56.929497   37994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:13:56.942004   37994 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:13:57.415650   37994 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
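The long run of "Checking apiserver status" entries above is a poll of pgrep roughly every half second; when the deadline passes without a kube-apiserver process appearing, the restart path gives up and reconfigures the cluster. A sketch of that poll-until-deadline pattern (the interval and timeout are inferred from the log's cadence, not taken from minikube's source):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerPID polls pgrep until a kube-apiserver process is found or
// the context deadline expires.
func waitForAPIServerPID(ctx context.Context) (string, error) {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return string(out), nil // pgrep printed a PID
		}
		select {
		case <-ctx.Done():
			return "", ctx.Err() // e.g. "context deadline exceeded"
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	pid, err := waitForAPIServerPID(ctx)
	fmt.Println(pid, err)
}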
	I0717 22:13:57.415680   37994 kubeadm.go:1128] stopping kube-system containers ...
	I0717 22:13:57.415692   37994 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 22:13:57.415744   37994 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:13:57.446688   37994 cri.go:89] found id: ""
	I0717 22:13:57.446770   37994 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 22:13:57.462224   37994 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 22:13:57.470867   37994 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0717 22:13:57.470888   37994 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0717 22:13:57.470895   37994 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0717 22:13:57.470902   37994 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:13:57.470938   37994 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:13:57.470981   37994 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 22:13:57.479797   37994 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 22:13:57.479824   37994 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:13:57.603437   37994 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 22:13:57.603463   37994 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0717 22:13:57.603473   37994 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0717 22:13:57.603482   37994 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 22:13:57.603499   37994 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0717 22:13:57.603509   37994 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0717 22:13:57.603518   37994 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0717 22:13:57.603527   37994 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0717 22:13:57.603538   37994 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0717 22:13:57.603549   37994 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 22:13:57.603560   37994 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 22:13:57.603579   37994 command_runner.go:130] > [certs] Using the existing "sa" key
	I0717 22:13:57.603605   37994 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:13:57.658789   37994 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 22:13:57.837602   37994 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 22:13:57.947985   37994 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 22:13:58.147194   37994 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 22:13:58.323522   37994 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 22:13:58.327488   37994 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:13:58.531803   37994 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 22:13:58.531833   37994 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 22:13:58.531839   37994 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0717 22:13:58.531861   37994 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:13:58.619350   37994 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 22:13:58.619383   37994 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 22:13:58.619395   37994 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 22:13:58.619406   37994 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 22:13:58.619456   37994 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:13:58.686570   37994 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 22:13:58.686815   37994 api_server.go:52] waiting for apiserver process to appear ...
	I0717 22:13:58.686889   37994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:13:59.201930   37994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:13:59.701957   37994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:14:00.202010   37994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:14:00.701534   37994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:14:00.728784   37994 command_runner.go:130] > 1071
	I0717 22:14:00.728970   37994 api_server.go:72] duration metric: took 2.042168856s to wait for apiserver process to appear ...
	I0717 22:14:00.728984   37994 api_server.go:88] waiting for apiserver healthz status ...
	I0717 22:14:00.729000   37994 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8443/healthz ...
	I0717 22:14:00.729463   37994 api_server.go:269] stopped: https://192.168.39.222:8443/healthz: Get "https://192.168.39.222:8443/healthz": dial tcp 192.168.39.222:8443: connect: connection refused
	I0717 22:14:01.230263   37994 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8443/healthz ...
	I0717 22:14:04.938241   37994 api_server.go:279] https://192.168.39.222:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 22:14:04.938270   37994 api_server.go:103] status: https://192.168.39.222:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 22:14:04.938282   37994 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8443/healthz ...
	I0717 22:14:05.005114   37994 api_server.go:279] https://192.168.39.222:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 22:14:05.005144   37994 api_server.go:103] status: https://192.168.39.222:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 22:14:05.230522   37994 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8443/healthz ...
	I0717 22:14:05.237044   37994 api_server.go:279] https://192.168.39.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 22:14:05.237077   37994 api_server.go:103] status: https://192.168.39.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 22:14:05.730269   37994 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8443/healthz ...
	I0717 22:14:05.735559   37994 api_server.go:279] https://192.168.39.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 22:14:05.735596   37994 api_server.go:103] status: https://192.168.39.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 22:14:06.230223   37994 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8443/healthz ...
	I0717 22:14:06.235551   37994 api_server.go:279] https://192.168.39.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 22:14:06.235584   37994 api_server.go:103] status: https://192.168.39.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 22:14:06.730229   37994 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8443/healthz ...
	I0717 22:14:06.735938   37994 api_server.go:279] https://192.168.39.222:8443/healthz returned 200:
	ok
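The healthz probe above tolerates 403 and 500 responses while the apiserver finishes its post-start hooks (note the rbac/bootstrap-roles hook flipping from failed to ok) and only reports ready on a plain 200 with body "ok". A rough equivalent of that polling loop; purely as an assumption for this sketch it skips TLS verification, whereas the real client is configured with the cluster CA:

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls url until it returns HTTP 200 or the context expires.
func waitHealthz(ctx context.Context, url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch-only assumption: skip verification instead of loading the CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	fmt.Println(waitHealthz(ctx, "https://192.168.39.222:8443/healthz"))
}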
	I0717 22:14:06.736006   37994 round_trippers.go:463] GET https://192.168.39.222:8443/version
	I0717 22:14:06.736011   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:06.736020   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:06.736026   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:06.746887   37994 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0717 22:14:06.746906   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:06.746912   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:06.746918   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:06.746924   37994 round_trippers.go:580]     Content-Length: 263
	I0717 22:14:06.746929   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:06 GMT
	I0717 22:14:06.746934   37994 round_trippers.go:580]     Audit-Id: 94e250f1-7258-4967-821d-4e29cf0ce304
	I0717 22:14:06.746939   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:06.746945   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:06.746962   37994 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.3",
	  "gitCommit": "25b4e43193bcda6c7328a6d147b1fb73a33f1598",
	  "gitTreeState": "clean",
	  "buildDate": "2023-06-14T09:47:40Z",
	  "goVersion": "go1.20.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0717 22:14:06.747022   37994 api_server.go:141] control plane version: v1.27.3
	I0717 22:14:06.747037   37994 api_server.go:131] duration metric: took 6.01804904s to wait for apiserver health ...
	I0717 22:14:06.747046   37994 cni.go:84] Creating CNI manager for ""
	I0717 22:14:06.747055   37994 cni.go:137] 3 nodes found, recommending kindnet
	I0717 22:14:06.749008   37994 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 22:14:06.750485   37994 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 22:14:06.758269   37994 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0717 22:14:06.758291   37994 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0717 22:14:06.758301   37994 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0717 22:14:06.758314   37994 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 22:14:06.758327   37994 command_runner.go:130] > Access: 2023-07-17 22:13:32.496064079 +0000
	I0717 22:14:06.758338   37994 command_runner.go:130] > Modify: 2023-07-15 02:34:28.000000000 +0000
	I0717 22:14:06.758349   37994 command_runner.go:130] > Change: 2023-07-17 22:13:30.473064079 +0000
	I0717 22:14:06.758357   37994 command_runner.go:130] >  Birth: -
	I0717 22:14:06.758704   37994 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 22:14:06.758719   37994 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 22:14:06.791686   37994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 22:14:08.167426   37994 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0717 22:14:08.172604   37994 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0717 22:14:08.175865   37994 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0717 22:14:08.196676   37994 command_runner.go:130] > daemonset.apps/kindnet configured
	I0717 22:14:08.200097   37994 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.408379904s)
	I0717 22:14:08.200121   37994 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 22:14:08.200183   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods
	I0717 22:14:08.200191   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:08.200198   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:08.200205   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:08.204386   37994 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 22:14:08.204404   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:08.204411   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:08 GMT
	I0717 22:14:08.204420   37994 round_trippers.go:580]     Audit-Id: e954a44f-e60f-43e2-b9a6-754216d209d5
	I0717 22:14:08.204425   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:08.204433   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:08.204441   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:08.204446   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:08.206711   37994 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"821"},"items":[{"metadata":{"name":"coredns-5d78c9869d-z4fr8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"1fb1d992-a7b6-4259-ba61-dc4092c65c44","resourceVersion":"742","creationTimestamp":"2023-07-17T22:04:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a6163f77-c2a4-4a3f-a656-fb3401fc7602","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6163f77-c2a4-4a3f-a656-fb3401fc7602\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82898 chars]
	I0717 22:14:08.210527   37994 system_pods.go:59] 12 kube-system pods found
	I0717 22:14:08.210562   37994 system_pods.go:61] "coredns-5d78c9869d-z4fr8" [1fb1d992-a7b6-4259-ba61-dc4092c65c44] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 22:14:08.210573   37994 system_pods.go:61] "etcd-multinode-009530" [aed75ad9-0156-4275-8a41-b68d09c15660] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 22:14:08.210582   37994 system_pods.go:61] "kindnet-4tb65" [da2b2174-4ab2-4dc9-99ba-16cc00b0c7f2] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0717 22:14:08.210591   37994 system_pods.go:61] "kindnet-gh4hn" [d474f5c5-bd94-411b-8d69-b3871c2b5653] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0717 22:14:08.210606   37994 system_pods.go:61] "kindnet-zldcf" [faa5128f-071f-485e-958c-f3c4222704da] Running
	I0717 22:14:08.210612   37994 system_pods.go:61] "kube-apiserver-multinode-009530" [958b1550-f15f-49f3-acf3-dbab69f82fb8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 22:14:08.210625   37994 system_pods.go:61] "kube-controller-manager-multinode-009530" [1c9dba7c-6497-41f0-b751-17988278c710] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 22:14:08.210632   37994 system_pods.go:61] "kube-proxy-6rxv8" [0d197eb7-b5bd-446a-b2f4-c513c06afcbe] Running
	I0717 22:14:08.210638   37994 system_pods.go:61] "kube-proxy-jv9h4" [f3b140d5-ec70-4ffe-8372-7fb67d0fb0c9] Running
	I0717 22:14:08.210646   37994 system_pods.go:61] "kube-proxy-m5spw" [a4bf0eb3-126a-463e-a670-b4793e1c5ec9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 22:14:08.210656   37994 system_pods.go:61] "kube-scheduler-multinode-009530" [5da85194-923d-40f6-ab44-86209b1f057d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 22:14:08.210663   37994 system_pods.go:61] "storage-provisioner" [d8f48e9c-2b37-4edc-89e4-d032cac0d573] Running
	I0717 22:14:08.210668   37994 system_pods.go:74] duration metric: took 10.542474ms to wait for pod list to return data ...
	I0717 22:14:08.210676   37994 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:14:08.210725   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes
	I0717 22:14:08.210732   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:08.210739   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:08.210745   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:08.213416   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:14:08.213433   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:08.213440   37994 round_trippers.go:580]     Audit-Id: 679a8468-bca9-4362-bbcd-d38353a431cb
	I0717 22:14:08.213446   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:08.213452   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:08.213457   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:08.213466   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:08.213472   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:08 GMT
	I0717 22:14:08.213908   37994 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"821"},"items":[{"metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"734","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 15252 chars]
	I0717 22:14:08.214615   37994 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:14:08.214636   37994 node_conditions.go:123] node cpu capacity is 2
	I0717 22:14:08.214644   37994 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:14:08.214648   37994 node_conditions.go:123] node cpu capacity is 2
	I0717 22:14:08.214651   37994 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:14:08.214655   37994 node_conditions.go:123] node cpu capacity is 2
	I0717 22:14:08.214658   37994 node_conditions.go:105] duration metric: took 3.978711ms to run NodePressure ...
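	[Editorial aside, not part of the captured log] The NodePressure step above lists the nodes and reads each node's capacity (cpu, ephemeral-storage) and pressure conditions. A minimal client-go sketch of that kind of check is below; the kubeconfig path is the one written later in this log, and everything else is illustrative rather than minikube's actual node_conditions.go code.

	// nodecheck.go - illustrative only; mirrors the kind of node capacity/pressure check logged above.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path taken from the settings.go line later in this log; adjust as needed.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/16899-15759/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
			for _, c := range n.Status.Conditions {
				// MemoryPressure and DiskPressure should report "False" on a healthy node.
				if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure {
					fmt.Printf("  %s=%s\n", c.Type, c.Status)
				}
			}
		}
	}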
	I0717 22:14:08.214672   37994 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:14:08.456554   37994 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0717 22:14:08.456581   37994 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0717 22:14:08.456606   37994 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 22:14:08.456676   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0717 22:14:08.456684   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:08.456691   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:08.456697   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:08.464772   37994 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0717 22:14:08.464792   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:08.464799   37994 round_trippers.go:580]     Audit-Id: fa72aa96-1741-4282-b3f3-b76f8e061e2e
	I0717 22:14:08.464805   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:08.464810   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:08.464816   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:08.464821   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:08.464826   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:08 GMT
	I0717 22:14:08.469162   37994 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"824"},"items":[{"metadata":{"name":"etcd-multinode-009530","namespace":"kube-system","uid":"aed75ad9-0156-4275-8a41-b68d09c15660","resourceVersion":"738","creationTimestamp":"2023-07-17T22:03:52Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.222:2379","kubernetes.io/config.hash":"ab77d0bbc5cf528d40fb1d6635b3acda","kubernetes.io/config.mirror":"ab77d0bbc5cf528d40fb1d6635b3acda","kubernetes.io/config.seen":"2023-07-17T22:03:52.473671520Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:03:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 28886 chars]
	I0717 22:14:08.470107   37994 kubeadm.go:787] kubelet initialised
	I0717 22:14:08.470123   37994 kubeadm.go:788] duration metric: took 13.509406ms waiting for restarted kubelet to initialise ...
	I0717 22:14:08.470130   37994 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:14:08.470180   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods
	I0717 22:14:08.470188   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:08.470196   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:08.470202   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:08.473959   37994 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:14:08.473989   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:08.474002   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:08.474011   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:08.474019   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:08.474028   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:08.474040   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:08 GMT
	I0717 22:14:08.474049   37994 round_trippers.go:580]     Audit-Id: 819f6140-5c01-4f4f-8766-08ba7e9a1df5
	I0717 22:14:08.474825   37994 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"824"},"items":[{"metadata":{"name":"coredns-5d78c9869d-z4fr8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"1fb1d992-a7b6-4259-ba61-dc4092c65c44","resourceVersion":"742","creationTimestamp":"2023-07-17T22:04:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a6163f77-c2a4-4a3f-a656-fb3401fc7602","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6163f77-c2a4-4a3f-a656-fb3401fc7602\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82898 chars]
	I0717 22:14:08.478008   37994 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-z4fr8" in "kube-system" namespace to be "Ready" ...
	I0717 22:14:08.478104   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-z4fr8
	I0717 22:14:08.478123   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:08.478133   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:08.478144   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:08.484586   37994 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 22:14:08.484605   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:08.484612   37994 round_trippers.go:580]     Audit-Id: d81e77fc-3001-448a-83c3-83a827049ee6
	I0717 22:14:08.484617   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:08.484622   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:08.484628   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:08.484633   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:08.484638   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:08 GMT
	I0717 22:14:08.485850   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-z4fr8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"1fb1d992-a7b6-4259-ba61-dc4092c65c44","resourceVersion":"742","creationTimestamp":"2023-07-17T22:04:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a6163f77-c2a4-4a3f-a656-fb3401fc7602","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6163f77-c2a4-4a3f-a656-fb3401fc7602\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0717 22:14:08.486271   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:14:08.486284   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:08.486291   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:08.486297   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:08.490860   37994 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 22:14:08.490881   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:08.490892   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:08.490901   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:08.490910   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:08 GMT
	I0717 22:14:08.490920   37994 round_trippers.go:580]     Audit-Id: 4a73bf7e-00ee-4dfa-ad4f-4de7853dbbb7
	I0717 22:14:08.490929   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:08.490941   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:08.491105   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"734","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0717 22:14:08.491430   37994 pod_ready.go:97] node "multinode-009530" hosting pod "coredns-5d78c9869d-z4fr8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-009530" has status "Ready":"False"
	I0717 22:14:08.491446   37994 pod_ready.go:81] duration metric: took 13.412784ms waiting for pod "coredns-5d78c9869d-z4fr8" in "kube-system" namespace to be "Ready" ...
	E0717 22:14:08.491454   37994 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-009530" hosting pod "coredns-5d78c9869d-z4fr8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-009530" has status "Ready":"False"
	I0717 22:14:08.491461   37994 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:14:08.491519   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-009530
	I0717 22:14:08.491528   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:08.491535   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:08.491540   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:08.495564   37994 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 22:14:08.495585   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:08.495595   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:08.495603   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:08.495611   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:08.495618   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:08 GMT
	I0717 22:14:08.495627   37994 round_trippers.go:580]     Audit-Id: 16de0c75-5134-40e9-9ad7-97ac9c89e75b
	I0717 22:14:08.495634   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:08.495797   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-009530","namespace":"kube-system","uid":"aed75ad9-0156-4275-8a41-b68d09c15660","resourceVersion":"738","creationTimestamp":"2023-07-17T22:03:52Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.222:2379","kubernetes.io/config.hash":"ab77d0bbc5cf528d40fb1d6635b3acda","kubernetes.io/config.mirror":"ab77d0bbc5cf528d40fb1d6635b3acda","kubernetes.io/config.seen":"2023-07-17T22:03:52.473671520Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:03:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0717 22:14:08.496286   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:14:08.496300   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:08.496311   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:08.496321   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:08.501858   37994 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 22:14:08.501873   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:08.501879   37994 round_trippers.go:580]     Audit-Id: 7c06a236-0273-4e41-a06f-4f84b92e592e
	I0717 22:14:08.501885   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:08.501890   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:08.501896   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:08.501901   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:08.501906   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:08 GMT
	I0717 22:14:08.502463   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"734","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0717 22:14:08.502810   37994 pod_ready.go:97] node "multinode-009530" hosting pod "etcd-multinode-009530" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-009530" has status "Ready":"False"
	I0717 22:14:08.502832   37994 pod_ready.go:81] duration metric: took 11.362585ms waiting for pod "etcd-multinode-009530" in "kube-system" namespace to be "Ready" ...
	E0717 22:14:08.502841   37994 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-009530" hosting pod "etcd-multinode-009530" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-009530" has status "Ready":"False"
	I0717 22:14:08.502857   37994 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:14:08.502922   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-009530
	I0717 22:14:08.502930   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:08.502938   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:08.502948   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:08.510488   37994 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 22:14:08.510512   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:08.510523   37994 round_trippers.go:580]     Audit-Id: 7f53b8d7-2f40-4b87-8ac4-002ec0286f30
	I0717 22:14:08.510531   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:08.510539   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:08.510547   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:08.510554   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:08.510562   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:08 GMT
	I0717 22:14:08.511000   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-009530","namespace":"kube-system","uid":"958b1550-f15f-49f3-acf3-dbab69f82fb8","resourceVersion":"739","creationTimestamp":"2023-07-17T22:03:52Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.222:8443","kubernetes.io/config.hash":"49e7615bd1aa66d6e32161e120c48180","kubernetes.io/config.mirror":"49e7615bd1aa66d6e32161e120c48180","kubernetes.io/config.seen":"2023-07-17T22:03:52.473675304Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:03:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I0717 22:14:08.511467   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:14:08.511482   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:08.511493   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:08.511503   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:08.514377   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:14:08.514397   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:08.514407   37994 round_trippers.go:580]     Audit-Id: a216981b-6b39-4916-871e-7cf7342285e8
	I0717 22:14:08.514416   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:08.514426   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:08.514434   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:08.514443   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:08.514456   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:08 GMT
	I0717 22:14:08.514737   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"734","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0717 22:14:08.515111   37994 pod_ready.go:97] node "multinode-009530" hosting pod "kube-apiserver-multinode-009530" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-009530" has status "Ready":"False"
	I0717 22:14:08.515130   37994 pod_ready.go:81] duration metric: took 12.257625ms waiting for pod "kube-apiserver-multinode-009530" in "kube-system" namespace to be "Ready" ...
	E0717 22:14:08.515141   37994 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-009530" hosting pod "kube-apiserver-multinode-009530" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-009530" has status "Ready":"False"
	I0717 22:14:08.515151   37994 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:14:08.515200   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-009530
	I0717 22:14:08.515211   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:08.515222   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:08.515241   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:08.518154   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:14:08.518173   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:08.518183   37994 round_trippers.go:580]     Audit-Id: 74b6a649-97fe-44f9-9570-eebc3a3b8ccf
	I0717 22:14:08.518192   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:08.518200   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:08.518211   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:08.518219   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:08.518228   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:08 GMT
	I0717 22:14:08.518454   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-009530","namespace":"kube-system","uid":"1c9dba7c-6497-41f0-b751-17988278c710","resourceVersion":"740","creationTimestamp":"2023-07-17T22:03:52Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d8b61663949a18745a23bcf487c538f2","kubernetes.io/config.mirror":"d8b61663949a18745a23bcf487c538f2","kubernetes.io/config.seen":"2023-07-17T22:03:52.473676600Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:03:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I0717 22:14:08.600994   37994 request.go:628] Waited for 82.16648ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:14:08.601056   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:14:08.601065   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:08.601078   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:08.601090   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:08.604083   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:14:08.604103   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:08.604110   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:08.604116   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:08.604121   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:08.604130   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:08 GMT
	I0717 22:14:08.604138   37994 round_trippers.go:580]     Audit-Id: 40c3be03-8211-4a78-96e0-859f4dc53082
	I0717 22:14:08.604151   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:08.604425   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"734","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0717 22:14:08.604838   37994 pod_ready.go:97] node "multinode-009530" hosting pod "kube-controller-manager-multinode-009530" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-009530" has status "Ready":"False"
	I0717 22:14:08.604858   37994 pod_ready.go:81] duration metric: took 89.698241ms waiting for pod "kube-controller-manager-multinode-009530" in "kube-system" namespace to be "Ready" ...
	E0717 22:14:08.604873   37994 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-009530" hosting pod "kube-controller-manager-multinode-009530" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-009530" has status "Ready":"False"
	I0717 22:14:08.604886   37994 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6rxv8" in "kube-system" namespace to be "Ready" ...
	I0717 22:14:08.800300   37994 request.go:628] Waited for 195.319993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6rxv8
	I0717 22:14:08.800375   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6rxv8
	I0717 22:14:08.800382   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:08.800393   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:08.800403   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:08.803069   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:14:08.803096   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:08.803103   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:08.803110   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:08.803116   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:08 GMT
	I0717 22:14:08.803122   37994 round_trippers.go:580]     Audit-Id: 1fd1808d-bcca-43f8-83da-a8d0b2ee1b77
	I0717 22:14:08.803127   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:08.803132   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:08.803278   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6rxv8","generateName":"kube-proxy-","namespace":"kube-system","uid":"0d197eb7-b5bd-446a-b2f4-c513c06afcbe","resourceVersion":"512","creationTimestamp":"2023-07-17T22:04:43Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bcb9f55e-4db6-4370-ad50-de72169bcc0c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcb9f55e-4db6-4370-ad50-de72169bcc0c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0717 22:14:09.001127   37994 request.go:628] Waited for 197.411922ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m02
	I0717 22:14:09.001209   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m02
	I0717 22:14:09.001215   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:09.001223   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:09.001229   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:09.003639   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:14:09.003659   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:09.003666   37994 round_trippers.go:580]     Audit-Id: 8d0611fc-bce4-4a88-82aa-9f3ddb6414aa
	I0717 22:14:09.003672   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:09.003677   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:09.003682   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:09.003687   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:09.003693   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:08 GMT
	I0717 22:14:09.003941   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530-m02","uid":"329b572e-b661-4301-b778-f37c0f69b53d","resourceVersion":"731","creationTimestamp":"2023-07-17T22:04:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 3684 chars]
	I0717 22:14:09.004307   37994 pod_ready.go:92] pod "kube-proxy-6rxv8" in "kube-system" namespace has status "Ready":"True"
	I0717 22:14:09.004328   37994 pod_ready.go:81] duration metric: took 399.422602ms waiting for pod "kube-proxy-6rxv8" in "kube-system" namespace to be "Ready" ...
	I0717 22:14:09.004341   37994 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jv9h4" in "kube-system" namespace to be "Ready" ...
	I0717 22:14:09.200653   37994 request.go:628] Waited for 196.249108ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jv9h4
	I0717 22:14:09.200702   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jv9h4
	I0717 22:14:09.200708   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:09.200716   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:09.200734   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:09.203691   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:14:09.203718   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:09.203728   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:09.203746   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:09.203755   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:09.203768   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:09.203779   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:09 GMT
	I0717 22:14:09.203799   37994 round_trippers.go:580]     Audit-Id: c2e05c43-3846-4e45-bc2d-0e6aad6afd48
	I0717 22:14:09.204052   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jv9h4","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3b140d5-ec70-4ffe-8372-7fb67d0fb0c9","resourceVersion":"711","creationTimestamp":"2023-07-17T22:05:32Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bcb9f55e-4db6-4370-ad50-de72169bcc0c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:05:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcb9f55e-4db6-4370-ad50-de72169bcc0c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0717 22:14:09.400966   37994 request.go:628] Waited for 196.408426ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m03
	I0717 22:14:09.401040   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m03
	I0717 22:14:09.401046   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:09.401054   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:09.401065   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:09.404257   37994 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:14:09.404283   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:09.404292   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:09 GMT
	I0717 22:14:09.404297   37994 round_trippers.go:580]     Audit-Id: 4c173ebe-d281-4274-a08d-bee326f213d7
	I0717 22:14:09.404303   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:09.404308   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:09.404313   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:09.404318   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:09.404434   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530-m03","uid":"cadf8157-0bcb-4971-8496-da993f9c43bf","resourceVersion":"818","creationTimestamp":"2023-07-17T22:06:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:06:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3413 chars]
	I0717 22:14:09.404777   37994 pod_ready.go:92] pod "kube-proxy-jv9h4" in "kube-system" namespace has status "Ready":"True"
	I0717 22:14:09.404797   37994 pod_ready.go:81] duration metric: took 400.44682ms waiting for pod "kube-proxy-jv9h4" in "kube-system" namespace to be "Ready" ...
	I0717 22:14:09.404809   37994 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-m5spw" in "kube-system" namespace to be "Ready" ...
	I0717 22:14:09.601288   37994 request.go:628] Waited for 196.407461ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m5spw
	I0717 22:14:09.601363   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m5spw
	I0717 22:14:09.601370   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:09.601380   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:09.601394   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:09.604640   37994 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:14:09.604661   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:09.604668   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:09 GMT
	I0717 22:14:09.604674   37994 round_trippers.go:580]     Audit-Id: a685e972-f5ad-434b-ad39-90e3bd4d31de
	I0717 22:14:09.604679   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:09.604685   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:09.604692   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:09.604697   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:09.604798   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-m5spw","generateName":"kube-proxy-","namespace":"kube-system","uid":"a4bf0eb3-126a-463e-a670-b4793e1c5ec9","resourceVersion":"825","creationTimestamp":"2023-07-17T22:04:05Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bcb9f55e-4db6-4370-ad50-de72169bcc0c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcb9f55e-4db6-4370-ad50-de72169bcc0c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0717 22:14:09.800711   37994 request.go:628] Waited for 195.424036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:14:09.800796   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:14:09.800805   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:09.800817   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:09.800827   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:09.806038   37994 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 22:14:09.806061   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:09.806068   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:09.806073   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:09.806079   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:09.806085   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:09 GMT
	I0717 22:14:09.806090   37994 round_trippers.go:580]     Audit-Id: 45a2cec7-e9c6-449a-a18b-46384e7b0277
	I0717 22:14:09.806096   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:09.806278   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"734","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0717 22:14:09.806637   37994 pod_ready.go:97] node "multinode-009530" hosting pod "kube-proxy-m5spw" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-009530" has status "Ready":"False"
	I0717 22:14:09.806655   37994 pod_ready.go:81] duration metric: took 401.835226ms waiting for pod "kube-proxy-m5spw" in "kube-system" namespace to be "Ready" ...
	E0717 22:14:09.806664   37994 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-009530" hosting pod "kube-proxy-m5spw" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-009530" has status "Ready":"False"
	I0717 22:14:09.806671   37994 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:14:10.001129   37994 request.go:628] Waited for 194.390662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-009530
	I0717 22:14:10.001217   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-009530
	I0717 22:14:10.001228   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:10.001239   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:10.001250   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:10.008225   37994 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 22:14:10.008245   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:10.008253   37994 round_trippers.go:580]     Audit-Id: 42ed1042-e166-4bf4-9b8f-ab3d21d3e37c
	I0717 22:14:10.008258   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:10.008264   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:10.008269   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:10.008274   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:10.008280   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:09 GMT
	I0717 22:14:10.008411   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-009530","namespace":"kube-system","uid":"5da85194-923d-40f6-ab44-86209b1f057d","resourceVersion":"741","creationTimestamp":"2023-07-17T22:03:52Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"036d300e0ec7bf28a26e0c644008bbd5","kubernetes.io/config.mirror":"036d300e0ec7bf28a26e0c644008bbd5","kubernetes.io/config.seen":"2023-07-17T22:03:52.473677561Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:03:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4928 chars]
	I0717 22:14:10.201146   37994 request.go:628] Waited for 192.36904ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:14:10.201203   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:14:10.201228   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:10.201239   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:10.201247   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:10.205139   37994 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:14:10.205165   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:10.205177   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:10 GMT
	I0717 22:14:10.205186   37994 round_trippers.go:580]     Audit-Id: c86b3199-fc20-4d3d-997e-0c3ad06db1a6
	I0717 22:14:10.205201   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:10.205210   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:10.205219   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:10.205227   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:10.205661   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"734","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0717 22:14:10.206109   37994 pod_ready.go:97] node "multinode-009530" hosting pod "kube-scheduler-multinode-009530" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-009530" has status "Ready":"False"
	I0717 22:14:10.206134   37994 pod_ready.go:81] duration metric: took 399.455104ms waiting for pod "kube-scheduler-multinode-009530" in "kube-system" namespace to be "Ready" ...
	E0717 22:14:10.206146   37994 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-009530" hosting pod "kube-scheduler-multinode-009530" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-009530" has status "Ready":"False"
	I0717 22:14:10.206159   37994 pod_ready.go:38] duration metric: took 1.736020802s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
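	[Editorial aside, not part of the captured log] The pod_ready loop above polls each system-critical pod and only counts it as ready when the pod's PodReady condition is True and its hosting node reports Ready (which is why every control-plane pod was skipped while multinode-009530 was still "Ready":"False"). A minimal client-go sketch of that readiness test follows; the pod name and kubeconfig path are taken from this log, and the rest is an assumption-level illustration, not minikube's pod_ready.go implementation.

	// podready.go - illustrative sketch of the PodReady-condition check performed by the loop above.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod carries a PodReady condition with status True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/16899-15759/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Pod name taken from the log above.
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5d78c9869d-z4fr8", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s ready=%v\n", pod.Name, podReady(pod))
	}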
	I0717 22:14:10.206184   37994 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 22:14:10.219907   37994 command_runner.go:130] > -16
	I0717 22:14:10.219939   37994 ops.go:34] apiserver oom_adj: -16
	I0717 22:14:10.219946   37994 kubeadm.go:640] restartCluster took 22.830535733s
	I0717 22:14:10.219953   37994 kubeadm.go:406] StartCluster complete in 22.962395107s
	I0717 22:14:10.219970   37994 settings.go:142] acquiring lock: {Name:mk9be0d05c943f5010cb1ca9690e7c6a87f18950 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:14:10.220044   37994 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:14:10.220835   37994 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/kubeconfig: {Name:mk1295c692902c2f497638c9fc1fd126fdb1a2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:14:10.221113   37994 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 22:14:10.221205   37994 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0717 22:14:10.223965   37994 out.go:177] * Enabled addons: 
	I0717 22:14:10.221349   37994 config.go:182] Loaded profile config "multinode-009530": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:14:10.221402   37994 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:14:10.225247   37994 addons.go:502] enable addons completed in 4.047999ms: enabled=[]
	I0717 22:14:10.225458   37994 kapi.go:59] client config for multinode-009530: &rest.Config{Host:"https://192.168.39.222:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/client.crt", KeyFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/client.key", CAFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 22:14:10.225743   37994 round_trippers.go:463] GET https://192.168.39.222:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0717 22:14:10.225756   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:10.225767   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:10.225776   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:10.228617   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:14:10.228632   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:10.228641   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:10.228650   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:10.228660   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:10.228673   37994 round_trippers.go:580]     Content-Length: 291
	I0717 22:14:10.228685   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:10 GMT
	I0717 22:14:10.228698   37994 round_trippers.go:580]     Audit-Id: ef2e30c4-3f17-4d9f-b720-906e3d4fe12c
	I0717 22:14:10.228712   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:10.228744   37994 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c60c6831-559f-4b19-8b15-656b8972a35c","resourceVersion":"823","creationTimestamp":"2023-07-17T22:03:52Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0717 22:14:10.228898   37994 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-009530" context rescaled to 1 replicas
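The rescale logged by kapi.go:248 goes through the deployment's scale subresource: the GET on .../deployments/coredns/scale above returns an autoscaling/v1 Scale whose spec.replicas is adjusted if needed. A minimal client-go sketch of that operation, assuming an already-constructed *kubernetes.Clientset named clientset:

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // rescaleCoreDNS reads the coredns deployment's scale subresource and,
    // only if it differs, writes back the desired replica count.
    func rescaleCoreDNS(ctx context.Context, clientset *kubernetes.Clientset, replicas int32) error {
    	scale, err := clientset.AppsV1().Deployments("kube-system").
    		GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	if scale.Spec.Replicas == replicas {
    		return nil // already at the desired count
    	}
    	scale.Spec.Replicas = replicas
    	_, err = clientset.AppsV1().Deployments("kube-system").
    		UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
    	return err
    }

The Scale fetched above already reports spec.replicas: 1, so in this run an update like this would not change anything.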
	I0717 22:14:10.228925   37994 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.222 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 22:14:10.230559   37994 out.go:177] * Verifying Kubernetes components...
	I0717 22:14:10.231920   37994 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:14:10.326132   37994 command_runner.go:130] > apiVersion: v1
	I0717 22:14:10.326153   37994 command_runner.go:130] > data:
	I0717 22:14:10.326160   37994 command_runner.go:130] >   Corefile: |
	I0717 22:14:10.326166   37994 command_runner.go:130] >     .:53 {
	I0717 22:14:10.326170   37994 command_runner.go:130] >         log
	I0717 22:14:10.326176   37994 command_runner.go:130] >         errors
	I0717 22:14:10.326181   37994 command_runner.go:130] >         health {
	I0717 22:14:10.326188   37994 command_runner.go:130] >            lameduck 5s
	I0717 22:14:10.326193   37994 command_runner.go:130] >         }
	I0717 22:14:10.326200   37994 command_runner.go:130] >         ready
	I0717 22:14:10.326209   37994 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0717 22:14:10.326220   37994 command_runner.go:130] >            pods insecure
	I0717 22:14:10.326230   37994 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0717 22:14:10.326241   37994 command_runner.go:130] >            ttl 30
	I0717 22:14:10.326248   37994 command_runner.go:130] >         }
	I0717 22:14:10.326257   37994 command_runner.go:130] >         prometheus :9153
	I0717 22:14:10.326264   37994 command_runner.go:130] >         hosts {
	I0717 22:14:10.326274   37994 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I0717 22:14:10.326291   37994 command_runner.go:130] >            fallthrough
	I0717 22:14:10.326300   37994 command_runner.go:130] >         }
	I0717 22:14:10.326309   37994 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0717 22:14:10.326321   37994 command_runner.go:130] >            max_concurrent 1000
	I0717 22:14:10.326330   37994 command_runner.go:130] >         }
	I0717 22:14:10.326337   37994 command_runner.go:130] >         cache 30
	I0717 22:14:10.326346   37994 command_runner.go:130] >         loop
	I0717 22:14:10.326356   37994 command_runner.go:130] >         reload
	I0717 22:14:10.326364   37994 command_runner.go:130] >         loadbalance
	I0717 22:14:10.326374   37994 command_runner.go:130] >     }
	I0717 22:14:10.326382   37994 command_runner.go:130] > kind: ConfigMap
	I0717 22:14:10.326389   37994 command_runner.go:130] > metadata:
	I0717 22:14:10.326399   37994 command_runner.go:130] >   creationTimestamp: "2023-07-17T22:03:52Z"
	I0717 22:14:10.326409   37994 command_runner.go:130] >   name: coredns
	I0717 22:14:10.326420   37994 command_runner.go:130] >   namespace: kube-system
	I0717 22:14:10.326430   37994 command_runner.go:130] >   resourceVersion: "397"
	I0717 22:14:10.326447   37994 command_runner.go:130] >   uid: 74a460b6-e979-4777-9478-ab3352b785ed
	I0717 22:14:10.328484   37994 node_ready.go:35] waiting up to 6m0s for node "multinode-009530" to be "Ready" ...
	I0717 22:14:10.328539   37994 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
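start.go:874 skips rewriting CoreDNS because the Corefile echoed above already contains a hosts block mapping 192.168.39.1 to host.minikube.internal, the record that lets in-cluster DNS clients reach the host machine. A small sketch of that kind of idempotence check, assuming the Corefile text has been pulled out of the coredns ConfigMap into a string; hasHostRecord is an illustrative name, not minikube's:

    import "strings"

    // hasHostRecord reports whether the Corefile already carries an
    // "<ip> <name>" host entry, so a restart can skip patching the ConfigMap.
    func hasHostRecord(corefile, hostIP, hostName string) bool {
    	for _, line := range strings.Split(corefile, "\n") {
    		fields := strings.Fields(line)
    		if len(fields) == 2 && fields[0] == hostIP && fields[1] == hostName {
    			return true
    		}
    	}
    	return false
    }

For the Corefile above, hasHostRecord(corefile, "192.168.39.1", "host.minikube.internal") returns true, which is why the log says "skipping".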
	I0717 22:14:10.400803   37994 request.go:628] Waited for 72.240745ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:14:10.400852   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:14:10.400860   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:10.400871   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:10.400896   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:10.404221   37994 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:14:10.404242   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:10.404252   37994 round_trippers.go:580]     Audit-Id: e7d9cac6-6d53-4fbe-ade0-a554c1ff6e96
	I0717 22:14:10.404260   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:10.404268   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:10.404275   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:10.404284   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:10.404296   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:10 GMT
	I0717 22:14:10.404778   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"734","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
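The 72ms pause recorded by request.go:628 just above comes from client-go's client-side rate limiter, not from API Priority and Fairness on the server (the message says so explicitly). The kapi.go client config earlier in this run is built with QPS:0, Burst:0, so client-go falls back to its defaults of 5 requests per second with a burst of 10, and tight polling loops like this one get spaced out. Raising the limits on the rest.Config is the usual remedy; the numbers below are illustrative, not the values minikube uses:

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // newFastClient builds a clientset whose client-side rate limiter tolerates
    // short bursts of GETs without "Waited for ... due to client-side throttling".
    func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return nil, err
    	}
    	cfg.QPS = 50    // default is 5
    	cfg.Burst = 100 // default is 10
    	return kubernetes.NewForConfig(cfg)
    }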
	I0717 22:14:10.905968   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:14:10.905992   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:10.906001   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:10.906008   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:10.909220   37994 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:14:10.909244   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:10.909253   37994 round_trippers.go:580]     Audit-Id: fc9d0220-ec3d-4e9f-a8c5-adb854820162
	I0717 22:14:10.909262   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:10.909271   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:10.909278   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:10.909286   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:10.909294   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:10 GMT
	I0717 22:14:10.909561   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"854","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0717 22:14:10.909980   37994 node_ready.go:49] node "multinode-009530" has status "Ready":"True"
	I0717 22:14:10.910000   37994 node_ready.go:38] duration metric: took 581.496811ms waiting for node "multinode-009530" to be "Ready" ...
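node_ready.go considers the node ready as soon as the Node object's Ready condition reports True: an earlier fetch of this node (resourceVersion 734) still had the condition False, while the response above at resourceVersion 854 reports it True. A minimal version of that condition check with client-go, assuming the same clientset as before:

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // nodeIsReady fetches the node and reports whether its Ready condition is True,
    // which is what the node_ready.go wait loop above is polling for.
    func nodeIsReady(ctx context.Context, clientset *kubernetes.Clientset, name string) (bool, error) {
    	node, err := clientset.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range node.Status.Conditions {
    		if cond.Type == corev1.NodeReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }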
	I0717 22:14:10.910008   37994 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:14:10.910074   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods
	I0717 22:14:10.910079   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:10.910085   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:10.910095   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:10.913540   37994 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:14:10.913558   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:10.913568   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:10.913577   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:10.913586   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:10 GMT
	I0717 22:14:10.913595   37994 round_trippers.go:580]     Audit-Id: 99c125c8-dd3a-4305-9b14-d3fdbec2a99a
	I0717 22:14:10.913611   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:10.913621   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:10.916372   37994 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"854"},"items":[{"metadata":{"name":"coredns-5d78c9869d-z4fr8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"1fb1d992-a7b6-4259-ba61-dc4092c65c44","resourceVersion":"742","creationTimestamp":"2023-07-17T22:04:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a6163f77-c2a4-4a3f-a656-fb3401fc7602","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6163f77-c2a4-4a3f-a656-fb3401fc7602\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82969 chars]
	I0717 22:14:10.918713   37994 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-z4fr8" in "kube-system" namespace to be "Ready" ...
	I0717 22:14:11.001081   37994 request.go:628] Waited for 82.299493ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-z4fr8
	I0717 22:14:11.001133   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-z4fr8
	I0717 22:14:11.001138   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:11.001146   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:11.001153   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:11.003747   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:14:11.003771   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:11.003782   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:11.003790   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:11.003799   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:11.003809   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:11.003821   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:10 GMT
	I0717 22:14:11.003834   37994 round_trippers.go:580]     Audit-Id: 16d544e0-b7db-4af2-a584-d3d6145661ab
	I0717 22:14:11.004034   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-z4fr8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"1fb1d992-a7b6-4259-ba61-dc4092c65c44","resourceVersion":"742","creationTimestamp":"2023-07-17T22:04:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a6163f77-c2a4-4a3f-a656-fb3401fc7602","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6163f77-c2a4-4a3f-a656-fb3401fc7602\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0717 22:14:11.200884   37994 request.go:628] Waited for 196.365907ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:14:11.200934   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:14:11.200952   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:11.200960   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:11.200966   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:11.203874   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:14:11.203899   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:11.203909   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:11.203918   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:11.203926   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:11 GMT
	I0717 22:14:11.203939   37994 round_trippers.go:580]     Audit-Id: e3e040f8-1451-4a43-9e88-683bc9a568b7
	I0717 22:14:11.203948   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:11.203956   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:11.204111   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"854","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0717 22:14:11.705382   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-z4fr8
	I0717 22:14:11.705415   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:11.705427   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:11.705439   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:11.708419   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:14:11.708445   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:11.708455   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:11 GMT
	I0717 22:14:11.708462   37994 round_trippers.go:580]     Audit-Id: 58d1ed61-f84b-48c7-826f-d9404ce9e324
	I0717 22:14:11.708471   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:11.708478   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:11.708486   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:11.708493   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:11.708736   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-z4fr8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"1fb1d992-a7b6-4259-ba61-dc4092c65c44","resourceVersion":"742","creationTimestamp":"2023-07-17T22:04:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a6163f77-c2a4-4a3f-a656-fb3401fc7602","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6163f77-c2a4-4a3f-a656-fb3401fc7602\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0717 22:14:11.709206   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:14:11.709222   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:11.709229   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:11.709235   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:11.712368   37994 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:14:11.712391   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:11.712402   37994 round_trippers.go:580]     Audit-Id: 698270b5-ce0b-43ed-8d46-393a9de05302
	I0717 22:14:11.712412   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:11.712419   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:11.712427   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:11.712436   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:11.712448   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:11 GMT
	I0717 22:14:11.712752   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"854","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0717 22:14:12.205571   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-z4fr8
	I0717 22:14:12.205607   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:12.205618   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:12.205628   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:12.208677   37994 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:14:12.208699   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:12.208709   37994 round_trippers.go:580]     Audit-Id: 3f70a120-6cb0-4b00-9d72-da91ac6c7759
	I0717 22:14:12.208717   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:12.208725   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:12.208733   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:12.208740   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:12.208747   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:12 GMT
	I0717 22:14:12.208949   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-z4fr8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"1fb1d992-a7b6-4259-ba61-dc4092c65c44","resourceVersion":"742","creationTimestamp":"2023-07-17T22:04:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a6163f77-c2a4-4a3f-a656-fb3401fc7602","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6163f77-c2a4-4a3f-a656-fb3401fc7602\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0717 22:14:12.209465   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:14:12.209480   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:12.209487   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:12.209493   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:12.211860   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:14:12.211881   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:12.211891   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:12.211901   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:12 GMT
	I0717 22:14:12.211911   37994 round_trippers.go:580]     Audit-Id: 1b942a08-2806-4705-a64f-4b59c99948ff
	I0717 22:14:12.211918   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:12.211924   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:12.211929   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:12.212243   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"854","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0717 22:14:12.705297   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-z4fr8
	I0717 22:14:12.705324   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:12.705333   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:12.705339   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:12.708139   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:14:12.708157   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:12.708164   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:12.708170   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:12.708175   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:12.708181   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:12 GMT
	I0717 22:14:12.708186   37994 round_trippers.go:580]     Audit-Id: 8e5b5ed1-ef11-4c7b-9d9c-34aa62d6d754
	I0717 22:14:12.708191   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:12.708783   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-z4fr8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"1fb1d992-a7b6-4259-ba61-dc4092c65c44","resourceVersion":"742","creationTimestamp":"2023-07-17T22:04:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a6163f77-c2a4-4a3f-a656-fb3401fc7602","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6163f77-c2a4-4a3f-a656-fb3401fc7602\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0717 22:14:12.709228   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:14:12.709241   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:12.709248   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:12.709254   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:12.712145   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:14:12.712173   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:12.712183   37994 round_trippers.go:580]     Audit-Id: 041bd3bc-d543-48b9-824e-b00a388a6390
	I0717 22:14:12.712191   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:12.712199   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:12.712209   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:12.712219   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:12.712228   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:12 GMT
	I0717 22:14:12.712415   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"854","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0717 22:14:13.204996   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-z4fr8
	I0717 22:14:13.205022   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:13.205032   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:13.205041   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:13.208242   37994 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:14:13.208267   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:13.208274   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:13.208279   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:13.208285   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:13.208290   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:13 GMT
	I0717 22:14:13.208296   37994 round_trippers.go:580]     Audit-Id: f47e5b97-2d0d-4d62-97f2-16fb519034d7
	I0717 22:14:13.208301   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:13.208414   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-z4fr8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"1fb1d992-a7b6-4259-ba61-dc4092c65c44","resourceVersion":"742","creationTimestamp":"2023-07-17T22:04:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a6163f77-c2a4-4a3f-a656-fb3401fc7602","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6163f77-c2a4-4a3f-a656-fb3401fc7602\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0717 22:14:13.208903   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:14:13.208918   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:13.208932   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:13.208940   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:13.211261   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:14:13.211278   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:13.211287   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:13.211296   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:13.211303   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:13 GMT
	I0717 22:14:13.211313   37994 round_trippers.go:580]     Audit-Id: 59985a86-c929-45b5-a2f2-a05eca06064a
	I0717 22:14:13.211322   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:13.211332   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:13.212038   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"854","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0717 22:14:13.212327   37994 pod_ready.go:102] pod "coredns-5d78c9869d-z4fr8" in "kube-system" namespace has status "Ready":"False"
	I0717 22:14:13.705543   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-z4fr8
	I0717 22:14:13.705574   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:13.705586   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:13.705602   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:13.714655   37994 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0717 22:14:13.714690   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:13.714706   37994 round_trippers.go:580]     Audit-Id: 8c1d9f10-b091-457c-9402-1a36096830f8
	I0717 22:14:13.714715   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:13.714723   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:13.714732   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:13.714740   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:13.714749   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:13 GMT
	I0717 22:14:13.715016   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-z4fr8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"1fb1d992-a7b6-4259-ba61-dc4092c65c44","resourceVersion":"742","creationTimestamp":"2023-07-17T22:04:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a6163f77-c2a4-4a3f-a656-fb3401fc7602","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6163f77-c2a4-4a3f-a656-fb3401fc7602\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0717 22:14:13.715613   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:14:13.715636   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:13.715647   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:13.715658   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:13.725739   37994 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0717 22:14:13.725764   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:13.725771   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:13.725777   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:13.725782   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:13 GMT
	I0717 22:14:13.725788   37994 round_trippers.go:580]     Audit-Id: 47329f95-b438-452c-a16a-6e7545df4a80
	I0717 22:14:13.725794   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:13.725804   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:13.725958   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"854","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0717 22:14:14.205690   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-z4fr8
	I0717 22:14:14.205718   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:14.205731   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:14.205741   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:14.208444   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:14:14.208459   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:14.208466   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:14.208472   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:14.208478   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:14.208483   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:14 GMT
	I0717 22:14:14.208489   37994 round_trippers.go:580]     Audit-Id: ce018042-8df9-4404-b4ec-e4f23f1a1fd5
	I0717 22:14:14.208496   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:14.208657   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-z4fr8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"1fb1d992-a7b6-4259-ba61-dc4092c65c44","resourceVersion":"742","creationTimestamp":"2023-07-17T22:04:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a6163f77-c2a4-4a3f-a656-fb3401fc7602","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6163f77-c2a4-4a3f-a656-fb3401fc7602\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0717 22:14:14.209109   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:14:14.209125   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:14.209136   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:14.209146   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:14.211159   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:14:14.211171   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:14.211177   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:14.211182   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:14.211189   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:14.211197   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:14.211206   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:14 GMT
	I0717 22:14:14.211220   37994 round_trippers.go:580]     Audit-Id: 1ef93eec-58a1-4d33-8f32-e7de5a1891bc
	I0717 22:14:14.211384   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"854","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0717 22:14:14.705053   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-z4fr8
	I0717 22:14:14.705077   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:14.705086   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:14.705093   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:14.707734   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:14:14.707758   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:14.707767   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:14.707773   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:14 GMT
	I0717 22:14:14.707779   37994 round_trippers.go:580]     Audit-Id: 7a2444c0-f6e2-426c-9e02-2e68f5bb9537
	I0717 22:14:14.707785   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:14.707790   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:14.707798   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:14.707980   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-z4fr8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"1fb1d992-a7b6-4259-ba61-dc4092c65c44","resourceVersion":"742","creationTimestamp":"2023-07-17T22:04:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a6163f77-c2a4-4a3f-a656-fb3401fc7602","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6163f77-c2a4-4a3f-a656-fb3401fc7602\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0717 22:14:14.708523   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:14:14.708548   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:14.708556   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:14.708566   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:14.711016   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:14:14.711035   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:14.711046   37994 round_trippers.go:580]     Audit-Id: 7106da7f-76f7-480f-ad31-ee6c3ce5d042
	I0717 22:14:14.711055   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:14.711066   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:14.711078   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:14.711086   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:14.711096   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:14 GMT
	I0717 22:14:14.711396   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"854","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0717 22:14:15.204826   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-z4fr8
	I0717 22:14:15.204854   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:15.204867   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:15.204876   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:15.207834   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:14:15.207860   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:15.207871   37994 round_trippers.go:580]     Audit-Id: d6f17d54-ac3f-4cac-a805-91dc3fdaa294
	I0717 22:14:15.207887   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:15.207896   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:15.207916   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:15.207926   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:15.207935   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:15 GMT
	I0717 22:14:15.208097   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-z4fr8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"1fb1d992-a7b6-4259-ba61-dc4092c65c44","resourceVersion":"866","creationTimestamp":"2023-07-17T22:04:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a6163f77-c2a4-4a3f-a656-fb3401fc7602","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6163f77-c2a4-4a3f-a656-fb3401fc7602\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0717 22:14:15.208680   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:14:15.208700   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:15.208711   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:15.208730   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:15.210958   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:14:15.210979   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:15.210989   37994 round_trippers.go:580]     Audit-Id: 0bacb666-05b5-4b5f-ac37-dce3fe8e72bd
	I0717 22:14:15.210999   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:15.211012   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:15.211019   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:15.211027   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:15.211040   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:15 GMT
	I0717 22:14:15.211186   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"854","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0717 22:14:15.211530   37994 pod_ready.go:92] pod "coredns-5d78c9869d-z4fr8" in "kube-system" namespace has status "Ready":"True"
	I0717 22:14:15.211545   37994 pod_ready.go:81] duration metric: took 4.292813204s waiting for pod "coredns-5d78c9869d-z4fr8" in "kube-system" namespace to be "Ready" ...
	I0717 22:14:15.211555   37994 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:14:15.211630   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-009530
	I0717 22:14:15.211639   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:15.211649   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:15.211663   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:15.213970   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:14:15.213986   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:15.213994   37994 round_trippers.go:580]     Audit-Id: cbc18462-2947-4947-b2fd-3982d21ec36f
	I0717 22:14:15.214002   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:15.214011   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:15.214024   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:15.214037   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:15.214051   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:15 GMT
	I0717 22:14:15.214276   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-009530","namespace":"kube-system","uid":"aed75ad9-0156-4275-8a41-b68d09c15660","resourceVersion":"857","creationTimestamp":"2023-07-17T22:03:52Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.222:2379","kubernetes.io/config.hash":"ab77d0bbc5cf528d40fb1d6635b3acda","kubernetes.io/config.mirror":"ab77d0bbc5cf528d40fb1d6635b3acda","kubernetes.io/config.seen":"2023-07-17T22:03:52.473671520Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:03:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0717 22:14:15.214603   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:14:15.214617   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:15.214627   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:15.214637   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:15.217699   37994 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:14:15.217713   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:15.217720   37994 round_trippers.go:580]     Audit-Id: fc49dd4c-6e35-4b37-a634-c253bdfa4568
	I0717 22:14:15.217726   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:15.217742   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:15.217753   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:15.217767   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:15.217777   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:15 GMT
	I0717 22:14:15.217920   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"854","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0717 22:14:15.218192   37994 pod_ready.go:92] pod "etcd-multinode-009530" in "kube-system" namespace has status "Ready":"True"
	I0717 22:14:15.218213   37994 pod_ready.go:81] duration metric: took 6.647989ms waiting for pod "etcd-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:14:15.218235   37994 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:14:15.218287   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-009530
	I0717 22:14:15.218296   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:15.218307   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:15.218321   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:15.221105   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:14:15.221123   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:15.221135   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:15.221143   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:15.221151   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:15.221159   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:15.221167   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:15 GMT
	I0717 22:14:15.221177   37994 round_trippers.go:580]     Audit-Id: 2d59baab-fb68-44e6-a295-b4613111e4dc
	I0717 22:14:15.221510   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-009530","namespace":"kube-system","uid":"958b1550-f15f-49f3-acf3-dbab69f82fb8","resourceVersion":"856","creationTimestamp":"2023-07-17T22:03:52Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.222:8443","kubernetes.io/config.hash":"49e7615bd1aa66d6e32161e120c48180","kubernetes.io/config.mirror":"49e7615bd1aa66d6e32161e120c48180","kubernetes.io/config.seen":"2023-07-17T22:03:52.473675304Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:03:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0717 22:14:15.221865   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:14:15.221876   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:15.221883   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:15.221891   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:15.226983   37994 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 22:14:15.227003   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:15.227016   37994 round_trippers.go:580]     Audit-Id: 639e463b-063b-42eb-9947-bacc1b1b7454
	I0717 22:14:15.227025   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:15.227037   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:15.227045   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:15.227058   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:15.227070   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:15 GMT
	I0717 22:14:15.227203   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"854","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0717 22:14:15.227573   37994 pod_ready.go:92] pod "kube-apiserver-multinode-009530" in "kube-system" namespace has status "Ready":"True"
	I0717 22:14:15.227598   37994 pod_ready.go:81] duration metric: took 9.350516ms waiting for pod "kube-apiserver-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:14:15.227616   37994 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:14:15.401021   37994 request.go:628] Waited for 173.319472ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-009530
	I0717 22:14:15.401084   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-009530
	I0717 22:14:15.401093   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:15.401106   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:15.401121   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:15.403926   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:14:15.403950   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:15.403960   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:15.403969   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:15.403977   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:15 GMT
	I0717 22:14:15.403992   37994 round_trippers.go:580]     Audit-Id: f8bfe24b-488b-48ea-a25e-4a74ea07bab0
	I0717 22:14:15.404000   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:15.404010   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:15.404372   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-009530","namespace":"kube-system","uid":"1c9dba7c-6497-41f0-b751-17988278c710","resourceVersion":"864","creationTimestamp":"2023-07-17T22:03:52Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d8b61663949a18745a23bcf487c538f2","kubernetes.io/config.mirror":"d8b61663949a18745a23bcf487c538f2","kubernetes.io/config.seen":"2023-07-17T22:03:52.473676600Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:03:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0717 22:14:15.600234   37994 request.go:628] Waited for 195.32727ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:14:15.600291   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:14:15.600296   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:15.600307   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:15.600313   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:15.603047   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:14:15.603069   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:15.603078   37994 round_trippers.go:580]     Audit-Id: 5600bd2b-35b1-483c-b9f7-5a593cc535e3
	I0717 22:14:15.603088   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:15.603115   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:15.603126   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:15.603134   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:15.603150   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:15 GMT
	I0717 22:14:15.603328   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"854","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0717 22:14:15.603650   37994 pod_ready.go:92] pod "kube-controller-manager-multinode-009530" in "kube-system" namespace has status "Ready":"True"
	I0717 22:14:15.603665   37994 pod_ready.go:81] duration metric: took 376.037663ms waiting for pod "kube-controller-manager-multinode-009530" in "kube-system" namespace to be "Ready" ...
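
The request.go:628 lines above ("Waited for ... due to client-side throttling, not priority and fairness") come from client-go's per-client rate limiter, not from API Priority and Fairness on the server. A minimal sketch, assuming client-go v0.27.x and the default kubeconfig location, of where that limiter is configured; the QPS and Burst values are illustrative, not what minikube itself uses.

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load a kubeconfig the same way kubectl would (path is an assumption).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}

	// client-go defaults to QPS=5 and Burst=10; requests beyond that budget
	// are delayed on the client, which is what produces the "Waited for ..."
	// lines above. Raising the values (illustrative numbers) removes most of
	// those pauses.
	config.QPS = 50
	config.Burst = 100

	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	_ = client // ready to issue requests under the new client-side rate limit
	fmt.Printf("client configured with QPS=%v burst=%v\n", config.QPS, config.Burst)
}
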
	I0717 22:14:15.603676   37994 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6rxv8" in "kube-system" namespace to be "Ready" ...
	I0717 22:14:15.801092   37994 request.go:628] Waited for 197.356892ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6rxv8
	I0717 22:14:15.801156   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6rxv8
	I0717 22:14:15.801161   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:15.801169   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:15.801175   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:15.804212   37994 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:14:15.804235   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:15.804248   37994 round_trippers.go:580]     Audit-Id: 2b82f9ff-eb06-458d-9158-f48815e899f2
	I0717 22:14:15.804256   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:15.804264   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:15.804271   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:15.804278   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:15.804285   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:15 GMT
	I0717 22:14:15.804486   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6rxv8","generateName":"kube-proxy-","namespace":"kube-system","uid":"0d197eb7-b5bd-446a-b2f4-c513c06afcbe","resourceVersion":"512","creationTimestamp":"2023-07-17T22:04:43Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bcb9f55e-4db6-4370-ad50-de72169bcc0c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcb9f55e-4db6-4370-ad50-de72169bcc0c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0717 22:14:16.001268   37994 request.go:628] Waited for 196.379188ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m02
	I0717 22:14:16.001326   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m02
	I0717 22:14:16.001331   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:16.001345   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:16.001357   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:16.004103   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:14:16.004128   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:16.004138   37994 round_trippers.go:580]     Audit-Id: d6cc73a9-74b2-46fc-a880-faab5c1734ad
	I0717 22:14:16.004148   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:16.004157   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:16.004162   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:16.004169   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:16.004181   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:15 GMT
	I0717 22:14:16.004286   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530-m02","uid":"329b572e-b661-4301-b778-f37c0f69b53d","resourceVersion":"731","creationTimestamp":"2023-07-17T22:04:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 3684 chars]
	I0717 22:14:16.004544   37994 pod_ready.go:92] pod "kube-proxy-6rxv8" in "kube-system" namespace has status "Ready":"True"
	I0717 22:14:16.004559   37994 pod_ready.go:81] duration metric: took 400.876967ms waiting for pod "kube-proxy-6rxv8" in "kube-system" namespace to be "Ready" ...
	I0717 22:14:16.004570   37994 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jv9h4" in "kube-system" namespace to be "Ready" ...
	I0717 22:14:16.201005   37994 request.go:628] Waited for 196.380271ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jv9h4
	I0717 22:14:16.201057   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jv9h4
	I0717 22:14:16.201062   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:16.201070   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:16.201077   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:16.204158   37994 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:14:16.204178   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:16.204185   37994 round_trippers.go:580]     Audit-Id: 09ba295f-a26e-48c3-9d73-4d279c569df4
	I0717 22:14:16.204191   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:16.204196   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:16.204201   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:16.204207   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:16.204212   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:16 GMT
	I0717 22:14:16.204379   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jv9h4","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3b140d5-ec70-4ffe-8372-7fb67d0fb0c9","resourceVersion":"711","creationTimestamp":"2023-07-17T22:05:32Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bcb9f55e-4db6-4370-ad50-de72169bcc0c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:05:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcb9f55e-4db6-4370-ad50-de72169bcc0c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0717 22:14:16.401252   37994 request.go:628] Waited for 196.356808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m03
	I0717 22:14:16.401302   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m03
	I0717 22:14:16.401307   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:16.401315   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:16.401321   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:16.404360   37994 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:14:16.404382   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:16.404390   37994 round_trippers.go:580]     Audit-Id: f89cb8b4-74c3-49a1-926c-841f96ff8920
	I0717 22:14:16.404396   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:16.404403   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:16.404408   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:16.404414   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:16.404420   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:16 GMT
	I0717 22:14:16.404708   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530-m03","uid":"cadf8157-0bcb-4971-8496-da993f9c43bf","resourceVersion":"818","creationTimestamp":"2023-07-17T22:06:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:06:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3413 chars]
	I0717 22:14:16.404955   37994 pod_ready.go:92] pod "kube-proxy-jv9h4" in "kube-system" namespace has status "Ready":"True"
	I0717 22:14:16.404968   37994 pod_ready.go:81] duration metric: took 400.393083ms waiting for pod "kube-proxy-jv9h4" in "kube-system" namespace to be "Ready" ...
	I0717 22:14:16.404976   37994 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m5spw" in "kube-system" namespace to be "Ready" ...
	I0717 22:14:16.600335   37994 request.go:628] Waited for 195.284585ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m5spw
	I0717 22:14:16.600405   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m5spw
	I0717 22:14:16.600409   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:16.600417   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:16.600424   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:16.603544   37994 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:14:16.603561   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:16.603567   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:16 GMT
	I0717 22:14:16.603573   37994 round_trippers.go:580]     Audit-Id: 18ee597e-d331-4bf3-8546-f2b63003e22f
	I0717 22:14:16.603579   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:16.603585   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:16.603596   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:16.603605   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:16.603996   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-m5spw","generateName":"kube-proxy-","namespace":"kube-system","uid":"a4bf0eb3-126a-463e-a670-b4793e1c5ec9","resourceVersion":"825","creationTimestamp":"2023-07-17T22:04:05Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bcb9f55e-4db6-4370-ad50-de72169bcc0c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcb9f55e-4db6-4370-ad50-de72169bcc0c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0717 22:14:16.800803   37994 request.go:628] Waited for 196.403251ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:14:16.800861   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:14:16.800867   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:16.800875   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:16.800881   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:16.803871   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:14:16.803889   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:16.803895   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:16.803901   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:16.803906   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:16.803911   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:16.803916   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:16 GMT
	I0717 22:14:16.803922   37994 round_trippers.go:580]     Audit-Id: aa3a0561-74db-4af0-be6d-ef4ac7487641
	I0717 22:14:16.804168   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"854","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0717 22:14:16.804596   37994 pod_ready.go:92] pod "kube-proxy-m5spw" in "kube-system" namespace has status "Ready":"True"
	I0717 22:14:16.804620   37994 pod_ready.go:81] duration metric: took 399.625773ms waiting for pod "kube-proxy-m5spw" in "kube-system" namespace to be "Ready" ...
	I0717 22:14:16.804633   37994 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:14:17.001110   37994 request.go:628] Waited for 196.408257ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-009530
	I0717 22:14:17.001173   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-009530
	I0717 22:14:17.001178   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:17.001186   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:17.001193   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:17.005735   37994 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 22:14:17.005763   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:17.005773   37994 round_trippers.go:580]     Audit-Id: cd246209-1f00-413d-9070-12d01a417e9f
	I0717 22:14:17.005782   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:17.005791   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:17.005799   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:17.005808   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:17.005820   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:16 GMT
	I0717 22:14:17.005939   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-009530","namespace":"kube-system","uid":"5da85194-923d-40f6-ab44-86209b1f057d","resourceVersion":"859","creationTimestamp":"2023-07-17T22:03:52Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"036d300e0ec7bf28a26e0c644008bbd5","kubernetes.io/config.mirror":"036d300e0ec7bf28a26e0c644008bbd5","kubernetes.io/config.seen":"2023-07-17T22:03:52.473677561Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:03:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0717 22:14:17.200744   37994 request.go:628] Waited for 194.340797ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:14:17.200810   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:14:17.200816   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:17.200825   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:17.200834   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:17.204269   37994 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:14:17.204286   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:17.204296   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:17.204307   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:17 GMT
	I0717 22:14:17.204320   37994 round_trippers.go:580]     Audit-Id: 4d8d0354-bd3e-45e3-aa5e-24e5b5336b48
	I0717 22:14:17.204331   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:17.204343   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:17.204369   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:17.205243   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"854","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0717 22:14:17.205631   37994 pod_ready.go:92] pod "kube-scheduler-multinode-009530" in "kube-system" namespace has status "Ready":"True"
	I0717 22:14:17.205650   37994 pod_ready.go:81] duration metric: took 401.006524ms waiting for pod "kube-scheduler-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:14:17.205659   37994 pod_ready.go:38] duration metric: took 6.295625922s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
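
The pod_ready phase summarised above polls each system-critical pod until its PodReady condition reports True, with a 6-minute budget per pod. A minimal sketch of that kind of poll, assuming client-go v0.27.x and a default kubeconfig; the pod name is taken from the log, and the helper is illustrative rather than minikube's pod_ready.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's PodReady condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Poll every 2s for up to 6 minutes, the same budget the log reports.
	err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"kube-scheduler-multinode-009530", metav1.GetOptions{})
		if err != nil {
			return false, nil // transient errors: keep polling
		}
		return podIsReady(pod), nil
	})
	fmt.Println("ready:", err == nil)
}
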
	I0717 22:14:17.205673   37994 api_server.go:52] waiting for apiserver process to appear ...
	I0717 22:14:17.205714   37994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:14:17.220561   37994 command_runner.go:130] > 1071
	I0717 22:14:17.220599   37994 api_server.go:72] duration metric: took 6.991615582s to wait for apiserver process to appear ...
	I0717 22:14:17.220611   37994 api_server.go:88] waiting for apiserver healthz status ...
	I0717 22:14:17.220629   37994 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8443/healthz ...
	I0717 22:14:17.226797   37994 api_server.go:279] https://192.168.39.222:8443/healthz returned 200:
	ok
	I0717 22:14:17.226847   37994 round_trippers.go:463] GET https://192.168.39.222:8443/version
	I0717 22:14:17.226852   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:17.226860   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:17.226869   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:17.227757   37994 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 22:14:17.227779   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:17.227785   37994 round_trippers.go:580]     Audit-Id: 91d825c5-853d-4678-8186-ae127e217ede
	I0717 22:14:17.227791   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:17.227796   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:17.227804   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:17.227813   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:17.227829   37994 round_trippers.go:580]     Content-Length: 263
	I0717 22:14:17.227837   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:17 GMT
	I0717 22:14:17.227881   37994 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.3",
	  "gitCommit": "25b4e43193bcda6c7328a6d147b1fb73a33f1598",
	  "gitTreeState": "clean",
	  "buildDate": "2023-06-14T09:47:40Z",
	  "goVersion": "go1.20.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0717 22:14:17.227931   37994 api_server.go:141] control plane version: v1.27.3
	I0717 22:14:17.227946   37994 api_server.go:131] duration metric: took 7.328834ms to wait for apiserver health ...
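
The two probes above are a plain GET on /healthz, which answers with the literal body "ok" when the apiserver is healthy, followed by a GET on /version whose JSON payload carries the control plane version shown. A minimal sketch of both calls, assuming client-go v0.27.x and a default kubeconfig.

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Healthz: the apiserver returns the plain-text body "ok" when healthy.
	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").Do(context.TODO()).Raw()
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// Version: the same payload shown in the log (major, minor, gitVersion, ...).
	info, err := client.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("control plane version: %s\n", info.GitVersion)
}
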
	I0717 22:14:17.227956   37994 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 22:14:17.401200   37994 request.go:628] Waited for 173.18749ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods
	I0717 22:14:17.401262   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods
	I0717 22:14:17.401276   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:17.401284   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:17.401291   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:17.405868   37994 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 22:14:17.405892   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:17.405903   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:17 GMT
	I0717 22:14:17.405911   37994 round_trippers.go:580]     Audit-Id: 08add2d1-965e-442b-8c5f-a7e3f4c56e5a
	I0717 22:14:17.405920   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:17.405929   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:17.405937   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:17.405947   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:17.407578   37994 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"875"},"items":[{"metadata":{"name":"coredns-5d78c9869d-z4fr8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"1fb1d992-a7b6-4259-ba61-dc4092c65c44","resourceVersion":"866","creationTimestamp":"2023-07-17T22:04:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a6163f77-c2a4-4a3f-a656-fb3401fc7602","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6163f77-c2a4-4a3f-a656-fb3401fc7602\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81890 chars]
	I0717 22:14:17.410030   37994 system_pods.go:59] 12 kube-system pods found
	I0717 22:14:17.410051   37994 system_pods.go:61] "coredns-5d78c9869d-z4fr8" [1fb1d992-a7b6-4259-ba61-dc4092c65c44] Running
	I0717 22:14:17.410056   37994 system_pods.go:61] "etcd-multinode-009530" [aed75ad9-0156-4275-8a41-b68d09c15660] Running
	I0717 22:14:17.410064   37994 system_pods.go:61] "kindnet-4tb65" [da2b2174-4ab2-4dc9-99ba-16cc00b0c7f2] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0717 22:14:17.410075   37994 system_pods.go:61] "kindnet-gh4hn" [d474f5c5-bd94-411b-8d69-b3871c2b5653] Running
	I0717 22:14:17.410091   37994 system_pods.go:61] "kindnet-zldcf" [faa5128f-071f-485e-958c-f3c4222704da] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0717 22:14:17.410101   37994 system_pods.go:61] "kube-apiserver-multinode-009530" [958b1550-f15f-49f3-acf3-dbab69f82fb8] Running
	I0717 22:14:17.410109   37994 system_pods.go:61] "kube-controller-manager-multinode-009530" [1c9dba7c-6497-41f0-b751-17988278c710] Running
	I0717 22:14:17.410113   37994 system_pods.go:61] "kube-proxy-6rxv8" [0d197eb7-b5bd-446a-b2f4-c513c06afcbe] Running
	I0717 22:14:17.410118   37994 system_pods.go:61] "kube-proxy-jv9h4" [f3b140d5-ec70-4ffe-8372-7fb67d0fb0c9] Running
	I0717 22:14:17.410122   37994 system_pods.go:61] "kube-proxy-m5spw" [a4bf0eb3-126a-463e-a670-b4793e1c5ec9] Running
	I0717 22:14:17.410129   37994 system_pods.go:61] "kube-scheduler-multinode-009530" [5da85194-923d-40f6-ab44-86209b1f057d] Running
	I0717 22:14:17.410133   37994 system_pods.go:61] "storage-provisioner" [d8f48e9c-2b37-4edc-89e4-d032cac0d573] Running
	I0717 22:14:17.410140   37994 system_pods.go:74] duration metric: took 182.179892ms to wait for pod list to return data ...
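
The system_pods survey above lists every pod in kube-system and flags the ones whose containers are not yet ready (the "Running / Ready:ContainersNotReady" entries). A minimal sketch of that survey, assuming client-go v0.27.x and a default kubeconfig; it is illustrative rather than minikube's system_pods.go.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		var notReady []string
		for _, st := range p.Status.ContainerStatuses {
			if !st.Ready {
				notReady = append(notReady, st.Name)
			}
		}
		if len(notReady) == 0 {
			fmt.Printf("%q %s\n", p.Name, p.Status.Phase)
		} else {
			fmt.Printf("%q %s / containers with unready status: %v\n", p.Name, p.Status.Phase, notReady)
		}
	}
}
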
	I0717 22:14:17.410149   37994 default_sa.go:34] waiting for default service account to be created ...
	I0717 22:14:17.601240   37994 request.go:628] Waited for 190.998091ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/default/serviceaccounts
	I0717 22:14:17.601308   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/default/serviceaccounts
	I0717 22:14:17.601315   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:17.601325   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:17.601333   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:17.604218   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:14:17.604239   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:17.604248   37994 round_trippers.go:580]     Audit-Id: 30bfe674-3057-4abc-b2db-ffdb32284704
	I0717 22:14:17.604256   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:17.604264   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:17.604272   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:17.604286   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:17.604296   37994 round_trippers.go:580]     Content-Length: 261
	I0717 22:14:17.604309   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:17 GMT
	I0717 22:14:17.604334   37994 request.go:1188] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"876"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"558ff881-614f-4fb6-9e77-8488151c76a7","resourceVersion":"345","creationTimestamp":"2023-07-17T22:04:04Z"}}]}
	I0717 22:14:17.604539   37994 default_sa.go:45] found service account: "default"
	I0717 22:14:17.604559   37994 default_sa.go:55] duration metric: took 194.401668ms for default service account to be created ...
	I0717 22:14:17.604568   37994 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 22:14:17.800969   37994 request.go:628] Waited for 196.338226ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods
	I0717 22:14:17.801025   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods
	I0717 22:14:17.801030   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:17.801037   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:17.801044   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:17.805336   37994 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 22:14:17.805363   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:17.805374   37994 round_trippers.go:580]     Audit-Id: ade6d42a-aacd-48e2-a524-e020bf8ba9ad
	I0717 22:14:17.805383   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:17.805397   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:17.805406   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:17.805418   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:17.805430   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:17 GMT
	I0717 22:14:17.806248   37994 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"882"},"items":[{"metadata":{"name":"coredns-5d78c9869d-z4fr8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"1fb1d992-a7b6-4259-ba61-dc4092c65c44","resourceVersion":"866","creationTimestamp":"2023-07-17T22:04:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a6163f77-c2a4-4a3f-a656-fb3401fc7602","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6163f77-c2a4-4a3f-a656-fb3401fc7602\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81524 chars]
	I0717 22:14:17.808641   37994 system_pods.go:86] 12 kube-system pods found
	I0717 22:14:17.808663   37994 system_pods.go:89] "coredns-5d78c9869d-z4fr8" [1fb1d992-a7b6-4259-ba61-dc4092c65c44] Running
	I0717 22:14:17.808676   37994 system_pods.go:89] "etcd-multinode-009530" [aed75ad9-0156-4275-8a41-b68d09c15660] Running
	I0717 22:14:17.808681   37994 system_pods.go:89] "kindnet-4tb65" [da2b2174-4ab2-4dc9-99ba-16cc00b0c7f2] Running
	I0717 22:14:17.808685   37994 system_pods.go:89] "kindnet-gh4hn" [d474f5c5-bd94-411b-8d69-b3871c2b5653] Running
	I0717 22:14:17.808692   37994 system_pods.go:89] "kindnet-zldcf" [faa5128f-071f-485e-958c-f3c4222704da] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0717 22:14:17.808697   37994 system_pods.go:89] "kube-apiserver-multinode-009530" [958b1550-f15f-49f3-acf3-dbab69f82fb8] Running
	I0717 22:14:17.808702   37994 system_pods.go:89] "kube-controller-manager-multinode-009530" [1c9dba7c-6497-41f0-b751-17988278c710] Running
	I0717 22:14:17.808706   37994 system_pods.go:89] "kube-proxy-6rxv8" [0d197eb7-b5bd-446a-b2f4-c513c06afcbe] Running
	I0717 22:14:17.808709   37994 system_pods.go:89] "kube-proxy-jv9h4" [f3b140d5-ec70-4ffe-8372-7fb67d0fb0c9] Running
	I0717 22:14:17.808714   37994 system_pods.go:89] "kube-proxy-m5spw" [a4bf0eb3-126a-463e-a670-b4793e1c5ec9] Running
	I0717 22:14:17.808718   37994 system_pods.go:89] "kube-scheduler-multinode-009530" [5da85194-923d-40f6-ab44-86209b1f057d] Running
	I0717 22:14:17.808722   37994 system_pods.go:89] "storage-provisioner" [d8f48e9c-2b37-4edc-89e4-d032cac0d573] Running
	I0717 22:14:17.808728   37994 system_pods.go:126] duration metric: took 204.156023ms to wait for k8s-apps to be running ...
	I0717 22:14:17.808736   37994 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 22:14:17.808776   37994 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:14:17.822245   37994 system_svc.go:56] duration metric: took 13.499763ms WaitForService to wait for kubelet.
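
The kubelet probe above runs `sudo systemctl is-active --quiet kubelet` through minikube's SSH runner; the exit status alone carries the answer. A minimal local sketch of the same check with os/exec, assuming a systemd host.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet <unit>` prints nothing and exits 0 only
	// when the unit is active.
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		// A non-zero exit (exec.ExitError) means the unit is inactive or failed.
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
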
	I0717 22:14:17.822271   37994 kubeadm.go:581] duration metric: took 7.593286945s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 22:14:17.822292   37994 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:14:18.000691   37994 request.go:628] Waited for 178.336656ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes
	I0717 22:14:18.000747   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes
	I0717 22:14:18.000754   37994 round_trippers.go:469] Request Headers:
	I0717 22:14:18.000765   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:14:18.000782   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:14:18.004641   37994 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:14:18.004666   37994 round_trippers.go:577] Response Headers:
	I0717 22:14:18.004676   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:14:18.004686   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:14:18.004695   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:14:18.004704   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:14:17 GMT
	I0717 22:14:18.004711   37994 round_trippers.go:580]     Audit-Id: 3c937ee2-183f-4d77-9ab2-bef04f43b8d0
	I0717 22:14:18.004716   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:14:18.004954   37994 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"882"},"items":[{"metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"854","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 15076 chars]
	I0717 22:14:18.005747   37994 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:14:18.005768   37994 node_conditions.go:123] node cpu capacity is 2
	I0717 22:14:18.005780   37994 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:14:18.005788   37994 node_conditions.go:123] node cpu capacity is 2
	I0717 22:14:18.005794   37994 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:14:18.005802   37994 node_conditions.go:123] node cpu capacity is 2
	I0717 22:14:18.005808   37994 node_conditions.go:105] duration metric: took 183.511924ms to run NodePressure ...
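
The node_conditions pass above reads each node's capacity (ephemeral storage and CPU) out of the NodeList it just fetched. A minimal sketch, assuming client-go v0.27.x and a default kubeconfig, that prints the same two figures per node; it is illustrative, not minikube's node_conditions.go.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity is a ResourceList keyed by resource name.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}
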
	I0717 22:14:18.005820   37994 start.go:228] waiting for startup goroutines ...
	I0717 22:14:18.005834   37994 start.go:233] waiting for cluster config update ...
	I0717 22:14:18.005843   37994 start.go:242] writing updated cluster config ...
	I0717 22:14:18.006398   37994 config.go:182] Loaded profile config "multinode-009530": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:14:18.006532   37994 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/config.json ...
	I0717 22:14:18.008843   37994 out.go:177] * Starting worker node multinode-009530-m02 in cluster multinode-009530
	I0717 22:14:18.010274   37994 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 22:14:18.010298   37994 cache.go:57] Caching tarball of preloaded images
	I0717 22:14:18.010408   37994 preload.go:174] Found /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 22:14:18.010422   37994 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 22:14:18.010508   37994 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/config.json ...
	I0717 22:14:18.010684   37994 start.go:365] acquiring machines lock for multinode-009530-m02: {Name:mk8a02d78ba6485e1a7d39c2e7feff944c6a9dbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 22:14:18.010731   37994 start.go:369] acquired machines lock for "multinode-009530-m02" in 25.724µs
	I0717 22:14:18.010744   37994 start.go:96] Skipping create...Using existing machine configuration
	I0717 22:14:18.010754   37994 fix.go:54] fixHost starting: m02
	I0717 22:14:18.010997   37994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:14:18.011036   37994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:14:18.025080   37994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40187
	I0717 22:14:18.025535   37994 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:14:18.025996   37994 main.go:141] libmachine: Using API Version  1
	I0717 22:14:18.026016   37994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:14:18.026309   37994 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:14:18.026440   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .DriverName
	I0717 22:14:18.026562   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetState
	I0717 22:14:18.028241   37994 fix.go:102] recreateIfNeeded on multinode-009530-m02: state=Running err=<nil>
	W0717 22:14:18.028255   37994 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 22:14:18.030123   37994 out.go:177] * Updating the running kvm2 "multinode-009530-m02" VM ...
	I0717 22:14:18.031660   37994 machine.go:88] provisioning docker machine ...
	I0717 22:14:18.031678   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .DriverName
	I0717 22:14:18.031886   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetMachineName
	I0717 22:14:18.032054   37994 buildroot.go:166] provisioning hostname "multinode-009530-m02"
	I0717 22:14:18.032071   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetMachineName
	I0717 22:14:18.032183   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHHostname
	I0717 22:14:18.034916   37994 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:14:18.035418   37994 main.go:141] libmachine: (multinode-009530-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:62", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:04:29 +0000 UTC Type:0 Mac:52:54:00:2a:ac:62 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-009530-m02 Clientid:01:52:54:00:2a:ac:62}
	I0717 22:14:18.035444   37994 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:14:18.035559   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHPort
	I0717 22:14:18.035711   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHKeyPath
	I0717 22:14:18.035845   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHKeyPath
	I0717 22:14:18.035972   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHUsername
	I0717 22:14:18.036104   37994 main.go:141] libmachine: Using SSH client type: native
	I0717 22:14:18.036510   37994 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0717 22:14:18.036528   37994 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-009530-m02 && echo "multinode-009530-m02" | sudo tee /etc/hostname
	I0717 22:14:18.172749   37994 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-009530-m02
	
	I0717 22:14:18.172784   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHHostname
	I0717 22:14:18.175768   37994 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:14:18.176133   37994 main.go:141] libmachine: (multinode-009530-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:62", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:04:29 +0000 UTC Type:0 Mac:52:54:00:2a:ac:62 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-009530-m02 Clientid:01:52:54:00:2a:ac:62}
	I0717 22:14:18.176173   37994 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:14:18.176356   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHPort
	I0717 22:14:18.176522   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHKeyPath
	I0717 22:14:18.176672   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHKeyPath
	I0717 22:14:18.176796   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHUsername
	I0717 22:14:18.176927   37994 main.go:141] libmachine: Using SSH client type: native
	I0717 22:14:18.177306   37994 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0717 22:14:18.177323   37994 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-009530-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-009530-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-009530-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 22:14:18.298696   37994 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:14:18.298724   37994 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16899-15759/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-15759/.minikube}
	I0717 22:14:18.298738   37994 buildroot.go:174] setting up certificates
	I0717 22:14:18.298745   37994 provision.go:83] configureAuth start
	I0717 22:14:18.298753   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetMachineName
	I0717 22:14:18.298999   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetIP
	I0717 22:14:18.301335   37994 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:14:18.301690   37994 main.go:141] libmachine: (multinode-009530-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:62", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:04:29 +0000 UTC Type:0 Mac:52:54:00:2a:ac:62 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-009530-m02 Clientid:01:52:54:00:2a:ac:62}
	I0717 22:14:18.301735   37994 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:14:18.301851   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHHostname
	I0717 22:14:18.303799   37994 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:14:18.304153   37994 main.go:141] libmachine: (multinode-009530-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:62", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:04:29 +0000 UTC Type:0 Mac:52:54:00:2a:ac:62 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-009530-m02 Clientid:01:52:54:00:2a:ac:62}
	I0717 22:14:18.304184   37994 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:14:18.304315   37994 provision.go:138] copyHostCerts
	I0717 22:14:18.304340   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem
	I0717 22:14:18.304366   37994 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem, removing ...
	I0717 22:14:18.304374   37994 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem
	I0717 22:14:18.304437   37994 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem (1123 bytes)
	I0717 22:14:18.304519   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem
	I0717 22:14:18.304538   37994 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem, removing ...
	I0717 22:14:18.304545   37994 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem
	I0717 22:14:18.304570   37994 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem (1675 bytes)
	I0717 22:14:18.304648   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem
	I0717 22:14:18.304666   37994 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem, removing ...
	I0717 22:14:18.304674   37994 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem
	I0717 22:14:18.304708   37994 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem (1078 bytes)
	I0717 22:14:18.304761   37994 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem org=jenkins.multinode-009530-m02 san=[192.168.39.146 192.168.39.146 localhost 127.0.0.1 minikube multinode-009530-m02]
	I0717 22:14:18.420787   37994 provision.go:172] copyRemoteCerts
	I0717 22:14:18.420835   37994 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 22:14:18.420856   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHHostname
	I0717 22:14:18.423485   37994 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:14:18.423777   37994 main.go:141] libmachine: (multinode-009530-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:62", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:04:29 +0000 UTC Type:0 Mac:52:54:00:2a:ac:62 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-009530-m02 Clientid:01:52:54:00:2a:ac:62}
	I0717 22:14:18.423801   37994 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:14:18.423984   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHPort
	I0717 22:14:18.424177   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHKeyPath
	I0717 22:14:18.424364   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHUsername
	I0717 22:14:18.424522   37994 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530-m02/id_rsa Username:docker}
	I0717 22:14:18.515544   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 22:14:18.515639   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 22:14:18.540726   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 22:14:18.540812   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0717 22:14:18.564342   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 22:14:18.564403   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 22:14:18.588392   37994 provision.go:86] duration metric: configureAuth took 289.636819ms
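	The generated server certificate was pushed to /etc/docker/server.pem with the SANs listed in the provision line above (node IP, localhost, hostname). A small sketch for inspecting those SANs on the guest, assuming the remote path used by the copy step:

		# illustrative only: print the Subject Alternative Names of the pushed server cert
		sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'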
	I0717 22:14:18.588413   37994 buildroot.go:189] setting minikube options for container-runtime
	I0717 22:14:18.588606   37994 config.go:182] Loaded profile config "multinode-009530": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:14:18.588673   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHHostname
	I0717 22:14:18.591260   37994 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:14:18.591698   37994 main.go:141] libmachine: (multinode-009530-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:62", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:04:29 +0000 UTC Type:0 Mac:52:54:00:2a:ac:62 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-009530-m02 Clientid:01:52:54:00:2a:ac:62}
	I0717 22:14:18.591726   37994 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:14:18.591940   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHPort
	I0717 22:14:18.592141   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHKeyPath
	I0717 22:14:18.592321   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHKeyPath
	I0717 22:14:18.592446   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHUsername
	I0717 22:14:18.592615   37994 main.go:141] libmachine: Using SSH client type: native
	I0717 22:14:18.593186   37994 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0717 22:14:18.593212   37994 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 22:15:49.255550   37994 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 22:15:49.255574   37994 machine.go:91] provisioned docker machine in 1m31.223901716s
	I0717 22:15:49.255585   37994 start.go:300] post-start starting for "multinode-009530-m02" (driver="kvm2")
	I0717 22:15:49.255595   37994 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 22:15:49.255611   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .DriverName
	I0717 22:15:49.255963   37994 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 22:15:49.255999   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHHostname
	I0717 22:15:49.258849   37994 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:15:49.259197   37994 main.go:141] libmachine: (multinode-009530-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:62", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:04:29 +0000 UTC Type:0 Mac:52:54:00:2a:ac:62 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-009530-m02 Clientid:01:52:54:00:2a:ac:62}
	I0717 22:15:49.259221   37994 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:15:49.259360   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHPort
	I0717 22:15:49.259560   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHKeyPath
	I0717 22:15:49.259685   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHUsername
	I0717 22:15:49.259874   37994 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530-m02/id_rsa Username:docker}
	I0717 22:15:49.356446   37994 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 22:15:49.360829   37994 command_runner.go:130] > NAME=Buildroot
	I0717 22:15:49.360848   37994 command_runner.go:130] > VERSION=2021.02.12-1-gf5d52c7-dirty
	I0717 22:15:49.360852   37994 command_runner.go:130] > ID=buildroot
	I0717 22:15:49.360857   37994 command_runner.go:130] > VERSION_ID=2021.02.12
	I0717 22:15:49.360862   37994 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0717 22:15:49.361111   37994 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 22:15:49.361128   37994 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/addons for local assets ...
	I0717 22:15:49.361203   37994 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/files for local assets ...
	I0717 22:15:49.361285   37994 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> 229902.pem in /etc/ssl/certs
	I0717 22:15:49.361298   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> /etc/ssl/certs/229902.pem
	I0717 22:15:49.361393   37994 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 22:15:49.369505   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:15:49.392302   37994 start.go:303] post-start completed in 136.70347ms
	I0717 22:15:49.392326   37994 fix.go:56] fixHost completed within 1m31.381572539s
	I0717 22:15:49.392345   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHHostname
	I0717 22:15:49.395104   37994 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:15:49.395450   37994 main.go:141] libmachine: (multinode-009530-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:62", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:04:29 +0000 UTC Type:0 Mac:52:54:00:2a:ac:62 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-009530-m02 Clientid:01:52:54:00:2a:ac:62}
	I0717 22:15:49.395482   37994 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:15:49.395616   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHPort
	I0717 22:15:49.395822   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHKeyPath
	I0717 22:15:49.395990   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHKeyPath
	I0717 22:15:49.396125   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHUsername
	I0717 22:15:49.396298   37994 main.go:141] libmachine: Using SSH client type: native
	I0717 22:15:49.396891   37994 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0717 22:15:49.396908   37994 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 22:15:49.519360   37994 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689632149.510812764
	
	I0717 22:15:49.519382   37994 fix.go:206] guest clock: 1689632149.510812764
	I0717 22:15:49.519389   37994 fix.go:219] Guest: 2023-07-17 22:15:49.510812764 +0000 UTC Remote: 2023-07-17 22:15:49.392329418 +0000 UTC m=+447.125182569 (delta=118.483346ms)
	I0717 22:15:49.519403   37994 fix.go:190] guest clock delta is within tolerance: 118.483346ms
	I0717 22:15:49.519408   37994 start.go:83] releasing machines lock for "multinode-009530-m02", held for 1m31.508669979s
	I0717 22:15:49.519430   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .DriverName
	I0717 22:15:49.519678   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetIP
	I0717 22:15:49.522629   37994 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:15:49.523078   37994 main.go:141] libmachine: (multinode-009530-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:62", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:04:29 +0000 UTC Type:0 Mac:52:54:00:2a:ac:62 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-009530-m02 Clientid:01:52:54:00:2a:ac:62}
	I0717 22:15:49.523116   37994 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:15:49.525594   37994 out.go:177] * Found network options:
	I0717 22:15:49.527581   37994 out.go:177]   - NO_PROXY=192.168.39.222
	W0717 22:15:49.529448   37994 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 22:15:49.529500   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .DriverName
	I0717 22:15:49.530222   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .DriverName
	I0717 22:15:49.530493   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .DriverName
	I0717 22:15:49.530574   37994 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 22:15:49.530608   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHHostname
	W0717 22:15:49.530706   37994 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 22:15:49.530787   37994 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 22:15:49.530811   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHHostname
	I0717 22:15:49.533535   37994 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:15:49.533826   37994 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:15:49.534038   37994 main.go:141] libmachine: (multinode-009530-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:62", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:04:29 +0000 UTC Type:0 Mac:52:54:00:2a:ac:62 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-009530-m02 Clientid:01:52:54:00:2a:ac:62}
	I0717 22:15:49.534068   37994 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:15:49.534215   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHPort
	I0717 22:15:49.534238   37994 main.go:141] libmachine: (multinode-009530-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:62", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:04:29 +0000 UTC Type:0 Mac:52:54:00:2a:ac:62 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-009530-m02 Clientid:01:52:54:00:2a:ac:62}
	I0717 22:15:49.534280   37994 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:15:49.534425   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHKeyPath
	I0717 22:15:49.534432   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHPort
	I0717 22:15:49.534630   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHUsername
	I0717 22:15:49.534633   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHKeyPath
	I0717 22:15:49.534775   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHUsername
	I0717 22:15:49.534841   37994 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530-m02/id_rsa Username:docker}
	I0717 22:15:49.534921   37994 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530-m02/id_rsa Username:docker}
	I0717 22:15:49.775897   37994 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0717 22:15:49.776025   37994 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 22:15:49.781935   37994 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0717 22:15:49.782027   37994 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 22:15:49.782089   37994 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:15:49.790548   37994 cni.go:265] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 22:15:49.790570   37994 start.go:466] detecting cgroup driver to use...
	I0717 22:15:49.790633   37994 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 22:15:49.804177   37994 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 22:15:49.816676   37994 docker.go:196] disabling cri-docker service (if available) ...
	I0717 22:15:49.816737   37994 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 22:15:49.830127   37994 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 22:15:49.842982   37994 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 22:15:49.967327   37994 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 22:15:50.088421   37994 docker.go:212] disabling docker service ...
	I0717 22:15:50.088487   37994 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 22:15:50.102701   37994 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 22:15:50.114953   37994 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 22:15:50.232309   37994 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 22:15:50.348426   37994 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 22:15:50.361434   37994 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 22:15:50.379086   37994 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0717 22:15:50.379448   37994 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 22:15:50.379527   37994 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:15:50.389050   37994 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 22:15:50.389113   37994 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:15:50.398369   37994 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:15:50.408520   37994 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:15:50.418107   37994 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 22:15:50.428104   37994 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 22:15:50.436850   37994 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0717 22:15:50.437003   37994 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 22:15:50.445495   37994 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 22:15:50.586374   37994 ssh_runner.go:195] Run: sudo systemctl restart crio
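	The sed commands above edit CRI-O's drop-in file rather than the main crio.conf. The log does not show the resulting file, but based on those edits (and the cgroup_manager / conmon_cgroup values echoed by the crio config dump further down), the relevant part of /etc/crio/crio.conf.d/02-crio.conf should end up roughly like this sketch of the intended end state, not a dump of the actual file:

		[crio.image]
		pause_image = "registry.k8s.io/pause:3.9"

		[crio.runtime]
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"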
	I0717 22:15:50.830460   37994 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 22:15:50.830533   37994 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 22:15:50.836029   37994 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0717 22:15:50.836053   37994 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0717 22:15:50.836062   37994 command_runner.go:130] > Device: 16h/22d	Inode: 1196        Links: 1
	I0717 22:15:50.836072   37994 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 22:15:50.836080   37994 command_runner.go:130] > Access: 2023-07-17 22:15:50.743821184 +0000
	I0717 22:15:50.836088   37994 command_runner.go:130] > Modify: 2023-07-17 22:15:50.743821184 +0000
	I0717 22:15:50.836096   37994 command_runner.go:130] > Change: 2023-07-17 22:15:50.743821184 +0000
	I0717 22:15:50.836101   37994 command_runner.go:130] >  Birth: -
	I0717 22:15:50.836407   37994 start.go:534] Will wait 60s for crictl version
	I0717 22:15:50.836467   37994 ssh_runner.go:195] Run: which crictl
	I0717 22:15:50.840127   37994 command_runner.go:130] > /usr/bin/crictl
	I0717 22:15:50.840240   37994 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 22:15:50.875266   37994 command_runner.go:130] > Version:  0.1.0
	I0717 22:15:50.875287   37994 command_runner.go:130] > RuntimeName:  cri-o
	I0717 22:15:50.875294   37994 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0717 22:15:50.875303   37994 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0717 22:15:50.875324   37994 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
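	With /etc/crictl.yaml now pointing at the CRI-O socket, the same version and status information can be pulled by hand; a minimal sketch using standard crictl invocations (not taken from this log):

		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
		sudo crictl info        # runtime status and conditions as JSON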
	I0717 22:15:50.875389   37994 ssh_runner.go:195] Run: crio --version
	I0717 22:15:50.929269   37994 command_runner.go:130] > crio version 1.24.1
	I0717 22:15:50.929294   37994 command_runner.go:130] > Version:          1.24.1
	I0717 22:15:50.929304   37994 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0717 22:15:50.929310   37994 command_runner.go:130] > GitTreeState:     dirty
	I0717 22:15:50.929317   37994 command_runner.go:130] > BuildDate:        2023-07-15T02:24:22Z
	I0717 22:15:50.929323   37994 command_runner.go:130] > GoVersion:        go1.19.9
	I0717 22:15:50.929329   37994 command_runner.go:130] > Compiler:         gc
	I0717 22:15:50.929335   37994 command_runner.go:130] > Platform:         linux/amd64
	I0717 22:15:50.929350   37994 command_runner.go:130] > Linkmode:         dynamic
	I0717 22:15:50.929366   37994 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0717 22:15:50.929375   37994 command_runner.go:130] > SeccompEnabled:   true
	I0717 22:15:50.929384   37994 command_runner.go:130] > AppArmorEnabled:  false
	I0717 22:15:50.930714   37994 ssh_runner.go:195] Run: crio --version
	I0717 22:15:50.980868   37994 command_runner.go:130] > crio version 1.24.1
	I0717 22:15:50.980890   37994 command_runner.go:130] > Version:          1.24.1
	I0717 22:15:50.980900   37994 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0717 22:15:50.980906   37994 command_runner.go:130] > GitTreeState:     dirty
	I0717 22:15:50.980913   37994 command_runner.go:130] > BuildDate:        2023-07-15T02:24:22Z
	I0717 22:15:50.980920   37994 command_runner.go:130] > GoVersion:        go1.19.9
	I0717 22:15:50.980926   37994 command_runner.go:130] > Compiler:         gc
	I0717 22:15:50.980932   37994 command_runner.go:130] > Platform:         linux/amd64
	I0717 22:15:50.980939   37994 command_runner.go:130] > Linkmode:         dynamic
	I0717 22:15:50.980972   37994 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0717 22:15:50.980981   37994 command_runner.go:130] > SeccompEnabled:   true
	I0717 22:15:50.980989   37994 command_runner.go:130] > AppArmorEnabled:  false
	I0717 22:15:50.982929   37994 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 22:15:50.984551   37994 out.go:177]   - env NO_PROXY=192.168.39.222
	I0717 22:15:50.986003   37994 main.go:141] libmachine: (multinode-009530-m02) Calling .GetIP
	I0717 22:15:50.988743   37994 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:15:50.989115   37994 main.go:141] libmachine: (multinode-009530-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:62", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:04:29 +0000 UTC Type:0 Mac:52:54:00:2a:ac:62 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-009530-m02 Clientid:01:52:54:00:2a:ac:62}
	I0717 22:15:50.989147   37994 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:15:50.989329   37994 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 22:15:50.993551   37994 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0717 22:15:50.993882   37994 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530 for IP: 192.168.39.146
	I0717 22:15:50.993905   37994 certs.go:190] acquiring lock for shared ca certs: {Name:mk358cdd8ffcf2f8ada4337c2bb687d932ef5afe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:15:50.994019   37994 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key
	I0717 22:15:50.994063   37994 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key
	I0717 22:15:50.994077   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 22:15:50.994091   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 22:15:50.994103   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 22:15:50.994115   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 22:15:50.994159   37994 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem (1338 bytes)
	W0717 22:15:50.994186   37994 certs.go:433] ignoring /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990_empty.pem, impossibly tiny 0 bytes
	I0717 22:15:50.994197   37994 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 22:15:50.994218   37994 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem (1078 bytes)
	I0717 22:15:50.994239   37994 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem (1123 bytes)
	I0717 22:15:50.994260   37994 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem (1675 bytes)
	I0717 22:15:50.994296   37994 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:15:50.994322   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> /usr/share/ca-certificates/229902.pem
	I0717 22:15:50.994335   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:15:50.994346   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem -> /usr/share/ca-certificates/22990.pem
	I0717 22:15:50.994631   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 22:15:51.019037   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 22:15:51.046611   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 22:15:51.072276   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 22:15:51.095625   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /usr/share/ca-certificates/229902.pem (1708 bytes)
	I0717 22:15:51.118346   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 22:15:51.140973   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem --> /usr/share/ca-certificates/22990.pem (1338 bytes)
	I0717 22:15:51.163516   37994 ssh_runner.go:195] Run: openssl version
	I0717 22:15:51.169606   37994 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0717 22:15:51.169968   37994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/229902.pem && ln -fs /usr/share/ca-certificates/229902.pem /etc/ssl/certs/229902.pem"
	I0717 22:15:51.180717   37994 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/229902.pem
	I0717 22:15:51.185310   37994 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 21:49 /usr/share/ca-certificates/229902.pem
	I0717 22:15:51.185357   37994 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 21:49 /usr/share/ca-certificates/229902.pem
	I0717 22:15:51.185397   37994 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/229902.pem
	I0717 22:15:51.190933   37994 command_runner.go:130] > 3ec20f2e
	I0717 22:15:51.190985   37994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/229902.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 22:15:51.200380   37994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 22:15:51.211018   37994 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:15:51.215326   37994 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:15:51.215474   37994 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:15:51.215531   37994 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:15:51.221064   37994 command_runner.go:130] > b5213941
	I0717 22:15:51.221120   37994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 22:15:51.230725   37994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22990.pem && ln -fs /usr/share/ca-certificates/22990.pem /etc/ssl/certs/22990.pem"
	I0717 22:15:51.241761   37994 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22990.pem
	I0717 22:15:51.246239   37994 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 21:49 /usr/share/ca-certificates/22990.pem
	I0717 22:15:51.246320   37994 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 21:49 /usr/share/ca-certificates/22990.pem
	I0717 22:15:51.246369   37994 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22990.pem
	I0717 22:15:51.251857   37994 command_runner.go:130] > 51391683
	I0717 22:15:51.251931   37994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22990.pem /etc/ssl/certs/51391683.0"
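	The hash-and-symlink sequence above follows the usual OpenSSL subject-hash layout for /etc/ssl/certs (what c_rehash automates): the link name is the certificate's subject hash plus ".0". A quick illustrative check against one of the certs from this run:

		h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
		ls -l /etc/ssl/certs/$h.0
		openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem   # should print: OK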
	I0717 22:15:51.261251   37994 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 22:15:51.265168   37994 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 22:15:51.265210   37994 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 22:15:51.265320   37994 ssh_runner.go:195] Run: crio config
	I0717 22:15:51.319647   37994 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0717 22:15:51.319677   37994 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0717 22:15:51.319686   37994 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0717 22:15:51.319691   37994 command_runner.go:130] > #
	I0717 22:15:51.319703   37994 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0717 22:15:51.319713   37994 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0717 22:15:51.319723   37994 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0717 22:15:51.319734   37994 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0717 22:15:51.319739   37994 command_runner.go:130] > # reload'.
	I0717 22:15:51.319748   37994 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0717 22:15:51.319763   37994 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0717 22:15:51.319774   37994 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0717 22:15:51.319791   37994 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0717 22:15:51.319796   37994 command_runner.go:130] > [crio]
	I0717 22:15:51.319809   37994 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0717 22:15:51.319820   37994 command_runner.go:130] > # containers images, in this directory.
	I0717 22:15:51.319831   37994 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0717 22:15:51.319847   37994 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0717 22:15:51.319859   37994 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0717 22:15:51.319869   37994 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0717 22:15:51.319882   37994 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0717 22:15:51.319893   37994 command_runner.go:130] > storage_driver = "overlay"
	I0717 22:15:51.319908   37994 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0717 22:15:51.319921   37994 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0717 22:15:51.319931   37994 command_runner.go:130] > storage_option = [
	I0717 22:15:51.319973   37994 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0717 22:15:51.319985   37994 command_runner.go:130] > ]
	I0717 22:15:51.319993   37994 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0717 22:15:51.320001   37994 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0717 22:15:51.320009   37994 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0717 22:15:51.320017   37994 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0717 22:15:51.320030   37994 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0717 22:15:51.320041   37994 command_runner.go:130] > # always happen on a node reboot
	I0717 22:15:51.320049   37994 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0717 22:15:51.320061   37994 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0717 22:15:51.320073   37994 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0717 22:15:51.320088   37994 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0717 22:15:51.320099   37994 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0717 22:15:51.320113   37994 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0717 22:15:51.320130   37994 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0717 22:15:51.320140   37994 command_runner.go:130] > # internal_wipe = true
	I0717 22:15:51.320149   37994 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0717 22:15:51.320162   37994 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0717 22:15:51.320175   37994 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0717 22:15:51.320183   37994 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0717 22:15:51.320196   37994 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0717 22:15:51.320203   37994 command_runner.go:130] > [crio.api]
	I0717 22:15:51.320215   37994 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0717 22:15:51.320226   37994 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0717 22:15:51.320238   37994 command_runner.go:130] > # IP address on which the stream server will listen.
	I0717 22:15:51.320248   37994 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0717 22:15:51.320257   37994 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0717 22:15:51.320280   37994 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0717 22:15:51.320290   37994 command_runner.go:130] > # stream_port = "0"
	I0717 22:15:51.320299   37994 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0717 22:15:51.320309   37994 command_runner.go:130] > # stream_enable_tls = false
	I0717 22:15:51.320319   37994 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0717 22:15:51.320330   37994 command_runner.go:130] > # stream_idle_timeout = ""
	I0717 22:15:51.320343   37994 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0717 22:15:51.320355   37994 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0717 22:15:51.320362   37994 command_runner.go:130] > # minutes.
	I0717 22:15:51.320372   37994 command_runner.go:130] > # stream_tls_cert = ""
	I0717 22:15:51.320384   37994 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0717 22:15:51.320397   37994 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0717 22:15:51.320435   37994 command_runner.go:130] > # stream_tls_key = ""
	I0717 22:15:51.320449   37994 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0717 22:15:51.320461   37994 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0717 22:15:51.320470   37994 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0717 22:15:51.320480   37994 command_runner.go:130] > # stream_tls_ca = ""
	I0717 22:15:51.320495   37994 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0717 22:15:51.320506   37994 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0717 22:15:51.320518   37994 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0717 22:15:51.320529   37994 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0717 22:15:51.320547   37994 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0717 22:15:51.320559   37994 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0717 22:15:51.320565   37994 command_runner.go:130] > [crio.runtime]
	I0717 22:15:51.320578   37994 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0717 22:15:51.320589   37994 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0717 22:15:51.320597   37994 command_runner.go:130] > # "nofile=1024:2048"
	I0717 22:15:51.320611   37994 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0717 22:15:51.320621   37994 command_runner.go:130] > # default_ulimits = [
	I0717 22:15:51.320639   37994 command_runner.go:130] > # ]
	I0717 22:15:51.320648   37994 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0717 22:15:51.320658   37994 command_runner.go:130] > # no_pivot = false
	I0717 22:15:51.320670   37994 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0717 22:15:51.320683   37994 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0717 22:15:51.320694   37994 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0717 22:15:51.320706   37994 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0717 22:15:51.320717   37994 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0717 22:15:51.320728   37994 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 22:15:51.320739   37994 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0717 22:15:51.320747   37994 command_runner.go:130] > # Cgroup setting for conmon
	I0717 22:15:51.320763   37994 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0717 22:15:51.320774   37994 command_runner.go:130] > conmon_cgroup = "pod"
	I0717 22:15:51.320784   37994 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0717 22:15:51.320796   37994 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0717 22:15:51.320809   37994 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 22:15:51.320820   37994 command_runner.go:130] > conmon_env = [
	I0717 22:15:51.320831   37994 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0717 22:15:51.320839   37994 command_runner.go:130] > ]
	I0717 22:15:51.320849   37994 command_runner.go:130] > # Additional environment variables to set for all the
	I0717 22:15:51.320860   37994 command_runner.go:130] > # containers. These are overridden if set in the
	I0717 22:15:51.320871   37994 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0717 22:15:51.320880   37994 command_runner.go:130] > # default_env = [
	I0717 22:15:51.320885   37994 command_runner.go:130] > # ]
	I0717 22:15:51.320898   37994 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0717 22:15:51.320907   37994 command_runner.go:130] > # selinux = false
	I0717 22:15:51.320918   37994 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0717 22:15:51.320930   37994 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0717 22:15:51.320940   37994 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0717 22:15:51.320950   37994 command_runner.go:130] > # seccomp_profile = ""
	I0717 22:15:51.320959   37994 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0717 22:15:51.320971   37994 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0717 22:15:51.320983   37994 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0717 22:15:51.320994   37994 command_runner.go:130] > # which might increase security.
	I0717 22:15:51.321002   37994 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0717 22:15:51.321027   37994 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0717 22:15:51.321042   37994 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0717 22:15:51.321057   37994 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0717 22:15:51.321071   37994 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0717 22:15:51.321083   37994 command_runner.go:130] > # This option supports live configuration reload.
	I0717 22:15:51.321094   37994 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0717 22:15:51.321103   37994 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0717 22:15:51.321113   37994 command_runner.go:130] > # the cgroup blockio controller.
	I0717 22:15:51.321119   37994 command_runner.go:130] > # blockio_config_file = ""
	I0717 22:15:51.321130   37994 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0717 22:15:51.321138   37994 command_runner.go:130] > # irqbalance daemon.
	I0717 22:15:51.321145   37994 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0717 22:15:51.321156   37994 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0717 22:15:51.321167   37994 command_runner.go:130] > # This option supports live configuration reload.
	I0717 22:15:51.321177   37994 command_runner.go:130] > # rdt_config_file = ""
	I0717 22:15:51.321189   37994 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0717 22:15:51.321197   37994 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0717 22:15:51.321210   37994 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0717 22:15:51.321251   37994 command_runner.go:130] > # separate_pull_cgroup = ""
	I0717 22:15:51.321271   37994 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0717 22:15:51.321285   37994 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0717 22:15:51.321291   37994 command_runner.go:130] > # will be added.
	I0717 22:15:51.321302   37994 command_runner.go:130] > # default_capabilities = [
	I0717 22:15:51.321313   37994 command_runner.go:130] > # 	"CHOWN",
	I0717 22:15:51.321321   37994 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0717 22:15:51.321330   37994 command_runner.go:130] > # 	"FSETID",
	I0717 22:15:51.321337   37994 command_runner.go:130] > # 	"FOWNER",
	I0717 22:15:51.321346   37994 command_runner.go:130] > # 	"SETGID",
	I0717 22:15:51.321352   37994 command_runner.go:130] > # 	"SETUID",
	I0717 22:15:51.321362   37994 command_runner.go:130] > # 	"SETPCAP",
	I0717 22:15:51.321371   37994 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0717 22:15:51.321378   37994 command_runner.go:130] > # 	"KILL",
	I0717 22:15:51.321389   37994 command_runner.go:130] > # ]
	I0717 22:15:51.321401   37994 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0717 22:15:51.321414   37994 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 22:15:51.321423   37994 command_runner.go:130] > # default_sysctls = [
	I0717 22:15:51.321434   37994 command_runner.go:130] > # ]
	I0717 22:15:51.321445   37994 command_runner.go:130] > # List of devices on the host that a
	I0717 22:15:51.321459   37994 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0717 22:15:51.321468   37994 command_runner.go:130] > # allowed_devices = [
	I0717 22:15:51.321474   37994 command_runner.go:130] > # 	"/dev/fuse",
	I0717 22:15:51.321482   37994 command_runner.go:130] > # ]
	I0717 22:15:51.321490   37994 command_runner.go:130] > # List of additional devices, specified as
	I0717 22:15:51.321505   37994 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0717 22:15:51.321528   37994 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0717 22:15:51.321554   37994 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 22:15:51.321564   37994 command_runner.go:130] > # additional_devices = [
	I0717 22:15:51.321573   37994 command_runner.go:130] > # ]
	I0717 22:15:51.321583   37994 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0717 22:15:51.321592   37994 command_runner.go:130] > # cdi_spec_dirs = [
	I0717 22:15:51.321601   37994 command_runner.go:130] > # 	"/etc/cdi",
	I0717 22:15:51.321610   37994 command_runner.go:130] > # 	"/var/run/cdi",
	I0717 22:15:51.321618   37994 command_runner.go:130] > # ]
	I0717 22:15:51.321632   37994 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0717 22:15:51.321646   37994 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0717 22:15:51.321655   37994 command_runner.go:130] > # Defaults to false.
	I0717 22:15:51.321666   37994 command_runner.go:130] > # device_ownership_from_security_context = false
	I0717 22:15:51.321679   37994 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0717 22:15:51.321691   37994 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0717 22:15:51.321701   37994 command_runner.go:130] > # hooks_dir = [
	I0717 22:15:51.321712   37994 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0717 22:15:51.321721   37994 command_runner.go:130] > # ]
	I0717 22:15:51.321733   37994 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0717 22:15:51.321747   37994 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0717 22:15:51.321759   37994 command_runner.go:130] > # its default mounts from the following two files:
	I0717 22:15:51.321768   37994 command_runner.go:130] > #
	I0717 22:15:51.321778   37994 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0717 22:15:51.321791   37994 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0717 22:15:51.321803   37994 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0717 22:15:51.321809   37994 command_runner.go:130] > #
	I0717 22:15:51.321819   37994 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0717 22:15:51.321833   37994 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0717 22:15:51.321846   37994 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0717 22:15:51.321859   37994 command_runner.go:130] > #      only add mounts it finds in this file.
	I0717 22:15:51.321867   37994 command_runner.go:130] > #
	I0717 22:15:51.321875   37994 command_runner.go:130] > # default_mounts_file = ""
	I0717 22:15:51.321887   37994 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0717 22:15:51.321901   37994 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0717 22:15:51.321910   37994 command_runner.go:130] > pids_limit = 1024
	I0717 22:15:51.321920   37994 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0717 22:15:51.321933   37994 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0717 22:15:51.321946   37994 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0717 22:15:51.321962   37994 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0717 22:15:51.321971   37994 command_runner.go:130] > # log_size_max = -1
	I0717 22:15:51.321983   37994 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0717 22:15:51.321993   37994 command_runner.go:130] > # log_to_journald = false
	I0717 22:15:51.322003   37994 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0717 22:15:51.322014   37994 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0717 22:15:51.322026   37994 command_runner.go:130] > # Path to directory for container attach sockets.
	I0717 22:15:51.322037   37994 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0717 22:15:51.322051   37994 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0717 22:15:51.322061   37994 command_runner.go:130] > # bind_mount_prefix = ""
	I0717 22:15:51.322073   37994 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0717 22:15:51.322084   37994 command_runner.go:130] > # read_only = false
	I0717 22:15:51.322099   37994 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0717 22:15:51.322112   37994 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0717 22:15:51.322122   37994 command_runner.go:130] > # live configuration reload.
	I0717 22:15:51.322132   37994 command_runner.go:130] > # log_level = "info"
	I0717 22:15:51.322141   37994 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0717 22:15:51.322154   37994 command_runner.go:130] > # This option supports live configuration reload.
	I0717 22:15:51.322161   37994 command_runner.go:130] > # log_filter = ""
	I0717 22:15:51.322174   37994 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0717 22:15:51.322186   37994 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0717 22:15:51.322193   37994 command_runner.go:130] > # separated by comma.
	I0717 22:15:51.322235   37994 command_runner.go:130] > # uid_mappings = ""
	I0717 22:15:51.322248   37994 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0717 22:15:51.322263   37994 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0717 22:15:51.322274   37994 command_runner.go:130] > # separated by comma.
	I0717 22:15:51.322283   37994 command_runner.go:130] > # gid_mappings = ""
	I0717 22:15:51.322296   37994 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0717 22:15:51.322309   37994 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 22:15:51.322321   37994 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 22:15:51.322331   37994 command_runner.go:130] > # minimum_mappable_uid = -1
	I0717 22:15:51.322343   37994 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0717 22:15:51.322355   37994 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 22:15:51.322368   37994 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 22:15:51.322378   37994 command_runner.go:130] > # minimum_mappable_gid = -1
	I0717 22:15:51.322392   37994 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0717 22:15:51.322404   37994 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0717 22:15:51.322416   37994 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0717 22:15:51.322426   37994 command_runner.go:130] > # ctr_stop_timeout = 30
	I0717 22:15:51.322438   37994 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0717 22:15:51.322451   37994 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0717 22:15:51.322463   37994 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0717 22:15:51.322476   37994 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0717 22:15:51.322486   37994 command_runner.go:130] > drop_infra_ctr = false
	I0717 22:15:51.322498   37994 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0717 22:15:51.322511   37994 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0717 22:15:51.322526   37994 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0717 22:15:51.322535   37994 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0717 22:15:51.322545   37994 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0717 22:15:51.322556   37994 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0717 22:15:51.322566   37994 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0717 22:15:51.322580   37994 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0717 22:15:51.322588   37994 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0717 22:15:51.322597   37994 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0717 22:15:51.322608   37994 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0717 22:15:51.322619   37994 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0717 22:15:51.322627   37994 command_runner.go:130] > # default_runtime = "runc"
	I0717 22:15:51.322634   37994 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0717 22:15:51.322647   37994 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0717 22:15:51.322664   37994 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0717 22:15:51.322674   37994 command_runner.go:130] > # creation as a file is not desired either.
	I0717 22:15:51.322690   37994 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0717 22:15:51.322701   37994 command_runner.go:130] > # the hostname is being managed dynamically.
	I0717 22:15:51.322708   37994 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0717 22:15:51.322716   37994 command_runner.go:130] > # ]
	I0717 22:15:51.322727   37994 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0717 22:15:51.322740   37994 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0717 22:15:51.322756   37994 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0717 22:15:51.322769   37994 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0717 22:15:51.322778   37994 command_runner.go:130] > #
	I0717 22:15:51.322789   37994 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0717 22:15:51.322800   37994 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0717 22:15:51.322810   37994 command_runner.go:130] > #  runtime_type = "oci"
	I0717 22:15:51.322821   37994 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0717 22:15:51.322832   37994 command_runner.go:130] > #  privileged_without_host_devices = false
	I0717 22:15:51.322841   37994 command_runner.go:130] > #  allowed_annotations = []
	I0717 22:15:51.322850   37994 command_runner.go:130] > # Where:
	I0717 22:15:51.322862   37994 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0717 22:15:51.322874   37994 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0717 22:15:51.322888   37994 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0717 22:15:51.322904   37994 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0717 22:15:51.322914   37994 command_runner.go:130] > #   in $PATH.
	I0717 22:15:51.322928   37994 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0717 22:15:51.322939   37994 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0717 22:15:51.322953   37994 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0717 22:15:51.322963   37994 command_runner.go:130] > #   state.
	I0717 22:15:51.322977   37994 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0717 22:15:51.322990   37994 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0717 22:15:51.323005   37994 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0717 22:15:51.323017   37994 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0717 22:15:51.323031   37994 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0717 22:15:51.323046   37994 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0717 22:15:51.323090   37994 command_runner.go:130] > #   The currently recognized values are:
	I0717 22:15:51.323104   37994 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0717 22:15:51.323117   37994 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0717 22:15:51.323130   37994 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0717 22:15:51.323142   37994 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0717 22:15:51.323155   37994 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0717 22:15:51.323167   37994 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0717 22:15:51.323180   37994 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0717 22:15:51.323193   37994 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0717 22:15:51.323204   37994 command_runner.go:130] > #   should be moved to the container's cgroup
	I0717 22:15:51.323214   37994 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0717 22:15:51.323224   37994 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0717 22:15:51.323234   37994 command_runner.go:130] > runtime_type = "oci"
	I0717 22:15:51.323244   37994 command_runner.go:130] > runtime_root = "/run/runc"
	I0717 22:15:51.323254   37994 command_runner.go:130] > runtime_config_path = ""
	I0717 22:15:51.323266   37994 command_runner.go:130] > monitor_path = ""
	I0717 22:15:51.323277   37994 command_runner.go:130] > monitor_cgroup = ""
	I0717 22:15:51.323287   37994 command_runner.go:130] > monitor_exec_cgroup = ""
	I0717 22:15:51.323300   37994 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0717 22:15:51.323310   37994 command_runner.go:130] > # running containers
	I0717 22:15:51.323319   37994 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0717 22:15:51.323333   37994 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0717 22:15:51.323374   37994 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0717 22:15:51.323386   37994 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0717 22:15:51.323398   37994 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0717 22:15:51.323408   37994 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0717 22:15:51.323418   37994 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0717 22:15:51.323428   37994 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0717 22:15:51.323438   37994 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0717 22:15:51.323447   37994 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0717 22:15:51.323459   37994 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0717 22:15:51.323469   37994 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0717 22:15:51.323482   37994 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0717 22:15:51.323495   37994 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0717 22:15:51.323510   37994 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0717 22:15:51.323521   37994 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0717 22:15:51.323538   37994 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0717 22:15:51.323553   37994 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0717 22:15:51.323565   37994 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0717 22:15:51.323579   37994 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0717 22:15:51.323589   37994 command_runner.go:130] > # Example:
	I0717 22:15:51.323598   37994 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0717 22:15:51.323609   37994 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0717 22:15:51.323616   37994 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0717 22:15:51.323627   37994 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0717 22:15:51.323632   37994 command_runner.go:130] > # cpuset = 0
	I0717 22:15:51.323641   37994 command_runner.go:130] > # cpushares = "0-1"
	I0717 22:15:51.323649   37994 command_runner.go:130] > # Where:
	I0717 22:15:51.323659   37994 command_runner.go:130] > # The workload name is workload-type.
	I0717 22:15:51.323673   37994 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0717 22:15:51.323681   37994 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0717 22:15:51.323691   37994 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0717 22:15:51.323704   37994 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0717 22:15:51.323716   37994 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0717 22:15:51.323725   37994 command_runner.go:130] > # 
	I0717 22:15:51.323736   37994 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0717 22:15:51.323744   37994 command_runner.go:130] > #
	I0717 22:15:51.323754   37994 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0717 22:15:51.323767   37994 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0717 22:15:51.323780   37994 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0717 22:15:51.323795   37994 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0717 22:15:51.323807   37994 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0717 22:15:51.323813   37994 command_runner.go:130] > [crio.image]
	I0717 22:15:51.323826   37994 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0717 22:15:51.323834   37994 command_runner.go:130] > # default_transport = "docker://"
	I0717 22:15:51.323847   37994 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0717 22:15:51.323860   37994 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0717 22:15:51.323869   37994 command_runner.go:130] > # global_auth_file = ""
	I0717 22:15:51.323880   37994 command_runner.go:130] > # The image used to instantiate infra containers.
	I0717 22:15:51.323890   37994 command_runner.go:130] > # This option supports live configuration reload.
	I0717 22:15:51.323929   37994 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0717 22:15:51.323942   37994 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0717 22:15:51.323954   37994 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0717 22:15:51.323964   37994 command_runner.go:130] > # This option supports live configuration reload.
	I0717 22:15:51.323974   37994 command_runner.go:130] > # pause_image_auth_file = ""
	I0717 22:15:51.323987   37994 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0717 22:15:51.323999   37994 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0717 22:15:51.324012   37994 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0717 22:15:51.324025   37994 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0717 22:15:51.324035   37994 command_runner.go:130] > # pause_command = "/pause"
	I0717 22:15:51.324047   37994 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0717 22:15:51.324059   37994 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0717 22:15:51.324071   37994 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0717 22:15:51.324081   37994 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0717 22:15:51.324093   37994 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0717 22:15:51.324101   37994 command_runner.go:130] > # signature_policy = ""
	I0717 22:15:51.324114   37994 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0717 22:15:51.324130   37994 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0717 22:15:51.324139   37994 command_runner.go:130] > # changing them here.
	I0717 22:15:51.324148   37994 command_runner.go:130] > # insecure_registries = [
	I0717 22:15:51.324156   37994 command_runner.go:130] > # ]
	I0717 22:15:51.324165   37994 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0717 22:15:51.324178   37994 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0717 22:15:51.324187   37994 command_runner.go:130] > # image_volumes = "mkdir"
	I0717 22:15:51.324198   37994 command_runner.go:130] > # Temporary directory to use for storing big files
	I0717 22:15:51.324208   37994 command_runner.go:130] > # big_files_temporary_dir = ""
	I0717 22:15:51.324222   37994 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0717 22:15:51.324231   37994 command_runner.go:130] > # CNI plugins.
	I0717 22:15:51.324239   37994 command_runner.go:130] > [crio.network]
	I0717 22:15:51.324252   37994 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0717 22:15:51.324269   37994 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0717 22:15:51.324279   37994 command_runner.go:130] > # cni_default_network = ""
	I0717 22:15:51.324291   37994 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0717 22:15:51.324301   37994 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0717 22:15:51.324310   37994 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0717 22:15:51.324319   37994 command_runner.go:130] > # plugin_dirs = [
	I0717 22:15:51.324326   37994 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0717 22:15:51.324332   37994 command_runner.go:130] > # ]
	I0717 22:15:51.324344   37994 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0717 22:15:51.324355   37994 command_runner.go:130] > [crio.metrics]
	I0717 22:15:51.324364   37994 command_runner.go:130] > # Globally enable or disable metrics support.
	I0717 22:15:51.324374   37994 command_runner.go:130] > enable_metrics = true
	I0717 22:15:51.324384   37994 command_runner.go:130] > # Specify enabled metrics collectors.
	I0717 22:15:51.324395   37994 command_runner.go:130] > # Per default all metrics are enabled.
	I0717 22:15:51.324413   37994 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0717 22:15:51.324425   37994 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0717 22:15:51.324436   37994 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0717 22:15:51.324444   37994 command_runner.go:130] > # metrics_collectors = [
	I0717 22:15:51.324453   37994 command_runner.go:130] > # 	"operations",
	I0717 22:15:51.324464   37994 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0717 22:15:51.324474   37994 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0717 22:15:51.324485   37994 command_runner.go:130] > # 	"operations_errors",
	I0717 22:15:51.324494   37994 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0717 22:15:51.324502   37994 command_runner.go:130] > # 	"image_pulls_by_name",
	I0717 22:15:51.324511   37994 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0717 22:15:51.324522   37994 command_runner.go:130] > # 	"image_pulls_failures",
	I0717 22:15:51.324531   37994 command_runner.go:130] > # 	"image_pulls_successes",
	I0717 22:15:51.324538   37994 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0717 22:15:51.324548   37994 command_runner.go:130] > # 	"image_layer_reuse",
	I0717 22:15:51.324557   37994 command_runner.go:130] > # 	"containers_oom_total",
	I0717 22:15:51.324566   37994 command_runner.go:130] > # 	"containers_oom",
	I0717 22:15:51.324576   37994 command_runner.go:130] > # 	"processes_defunct",
	I0717 22:15:51.324586   37994 command_runner.go:130] > # 	"operations_total",
	I0717 22:15:51.324596   37994 command_runner.go:130] > # 	"operations_latency_seconds",
	I0717 22:15:51.324607   37994 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0717 22:15:51.324617   37994 command_runner.go:130] > # 	"operations_errors_total",
	I0717 22:15:51.324626   37994 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0717 22:15:51.324634   37994 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0717 22:15:51.324646   37994 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0717 22:15:51.324657   37994 command_runner.go:130] > # 	"image_pulls_success_total",
	I0717 22:15:51.324668   37994 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0717 22:15:51.324678   37994 command_runner.go:130] > # 	"containers_oom_count_total",
	I0717 22:15:51.324684   37994 command_runner.go:130] > # ]
	I0717 22:15:51.324693   37994 command_runner.go:130] > # The port on which the metrics server will listen.
	I0717 22:15:51.324701   37994 command_runner.go:130] > # metrics_port = 9090
	I0717 22:15:51.324709   37994 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0717 22:15:51.324719   37994 command_runner.go:130] > # metrics_socket = ""
	I0717 22:15:51.324731   37994 command_runner.go:130] > # The certificate for the secure metrics server.
	I0717 22:15:51.324744   37994 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0717 22:15:51.324756   37994 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0717 22:15:51.324767   37994 command_runner.go:130] > # certificate on any modification event.
	I0717 22:15:51.324778   37994 command_runner.go:130] > # metrics_cert = ""
	I0717 22:15:51.324788   37994 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0717 22:15:51.324798   37994 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0717 22:15:51.324807   37994 command_runner.go:130] > # metrics_key = ""
	I0717 22:15:51.324818   37994 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0717 22:15:51.324826   37994 command_runner.go:130] > [crio.tracing]
	I0717 22:15:51.324837   37994 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0717 22:15:51.324847   37994 command_runner.go:130] > # enable_tracing = false
	I0717 22:15:51.324859   37994 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0717 22:15:51.324870   37994 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0717 22:15:51.324883   37994 command_runner.go:130] > # Number of samples to collect per million spans.
	I0717 22:15:51.324893   37994 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0717 22:15:51.324902   37994 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0717 22:15:51.324911   37994 command_runner.go:130] > [crio.stats]
	I0717 22:15:51.324956   37994 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0717 22:15:51.324968   37994 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0717 22:15:51.324979   37994 command_runner.go:130] > # stats_collection_period = 0
	I0717 22:15:51.325051   37994 command_runner.go:130] ! time="2023-07-17 22:15:51.308889780Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0717 22:15:51.325075   37994 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
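	The configuration dump above is mostly stock CRI-O; the values minikube sets away from the defaults are conmon, conmon_cgroup, seccomp_use_default_when_empty, cgroup_manager, pids_limit, drop_infra_ctr, pinns_path, the [crio.runtime.runtimes.runc] entry, pause_image and enable_metrics. A hedged sketch for re-checking those settings from a shell on the node, assuming the dump was produced by the crio config subcommand (the stderr lines directly above are what it prints while loading):

	  sudo crio config 2>/dev/null | grep -E '^(conmon|cgroup_manager|pids_limit|drop_infra_ctr|pinns_path|pause_image|enable_metrics|seccomp_use_default_when_empty)'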
	I0717 22:15:51.325335   37994 cni.go:84] Creating CNI manager for ""
	I0717 22:15:51.325354   37994 cni.go:137] 3 nodes found, recommending kindnet
	I0717 22:15:51.325367   37994 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 22:15:51.325395   37994 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.146 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-009530 NodeName:multinode-009530-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.146 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 22:15:51.325557   37994 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.146
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-009530-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.146
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.222"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 22:15:51.325624   37994 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-009530-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.146
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:multinode-009530 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
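	The ClusterConfiguration and kubelet flags rendered above are derived from the profile config on the preceding line; they can be compared against what the control plane actually stores (the same ConfigMap the kubeadm preflight output points at further down). Context and ConfigMap names below follow the standard kubeadm layout and are a sketch, not verified output:

	  kubectl --context multinode-009530 -n kube-system get cm kubeadm-config -o yaml
	  kubectl --context multinode-009530 -n kube-system get cm kubelet-config -o yaml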
	I0717 22:15:51.325687   37994 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 22:15:51.335938   37994 command_runner.go:130] > kubeadm
	I0717 22:15:51.335952   37994 command_runner.go:130] > kubectl
	I0717 22:15:51.335956   37994 command_runner.go:130] > kubelet
	I0717 22:15:51.336012   37994 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 22:15:51.336079   37994 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0717 22:15:51.345302   37994 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0717 22:15:51.364254   37994 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
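	A quick way to confirm the drop-in and unit file that were just copied onto the worker (default systemd paths as used above; run from a shell on the node):

	  sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	  sudo systemctl cat kubelet --no-pager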
	I0717 22:15:51.381623   37994 ssh_runner.go:195] Run: grep 192.168.39.222	control-plane.minikube.internal$ /etc/hosts
	I0717 22:15:51.386018   37994 command_runner.go:130] > 192.168.39.222	control-plane.minikube.internal
	I0717 22:15:51.386065   37994 host.go:66] Checking if "multinode-009530" exists ...
	I0717 22:15:51.386308   37994 config.go:182] Loaded profile config "multinode-009530": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:15:51.386379   37994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:15:51.386412   37994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:15:51.401100   37994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39285
	I0717 22:15:51.401557   37994 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:15:51.401996   37994 main.go:141] libmachine: Using API Version  1
	I0717 22:15:51.402020   37994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:15:51.402316   37994 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:15:51.402522   37994 main.go:141] libmachine: (multinode-009530) Calling .DriverName
	I0717 22:15:51.402661   37994 start.go:301] JoinCluster: &{Name:multinode-009530 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-009530 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.222 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.146 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.205 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false
istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentP
ID:0}
	I0717 22:15:51.402797   37994 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0717 22:15:51.402816   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHHostname
	I0717 22:15:51.405362   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:15:51.405773   37994 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:15:51.405804   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:15:51.405911   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHPort
	I0717 22:15:51.406101   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:15:51.406259   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHUsername
	I0717 22:15:51.406416   37994 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530/id_rsa Username:docker}
	I0717 22:15:51.586378   37994 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token h845z2.5dbdmz49n3c6w11e --discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb 
	I0717 22:15:51.586436   37994 start.go:314] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.146 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0717 22:15:51.586468   37994 host.go:66] Checking if "multinode-009530" exists ...
	I0717 22:15:51.586916   37994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:15:51.586969   37994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:15:51.601697   37994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35717
	I0717 22:15:51.602130   37994 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:15:51.602550   37994 main.go:141] libmachine: Using API Version  1
	I0717 22:15:51.602569   37994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:15:51.602894   37994 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:15:51.603096   37994 main.go:141] libmachine: (multinode-009530) Calling .DriverName
	I0717 22:15:51.603298   37994 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl drain multinode-009530-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0717 22:15:51.603322   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHHostname
	I0717 22:15:51.606005   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:15:51.606405   37994 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:15:51.606444   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:15:51.606596   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHPort
	I0717 22:15:51.606790   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:15:51.606946   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHUsername
	I0717 22:15:51.607080   37994 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530/id_rsa Username:docker}
	I0717 22:15:51.793435   37994 command_runner.go:130] > node/multinode-009530-m02 cordoned
	I0717 22:15:54.846444   37994 command_runner.go:130] > pod "busybox-67b7f59bb-58859" has DeletionTimestamp older than 1 seconds, skipping
	I0717 22:15:54.846469   37994 command_runner.go:130] > node/multinode-009530-m02 drained
	I0717 22:15:54.848026   37994 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0717 22:15:54.848055   37994 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-4tb65, kube-system/kube-proxy-6rxv8
	I0717 22:15:54.848083   37994 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl drain multinode-009530-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.244759019s)
	I0717 22:15:54.848098   37994 node.go:108] successfully drained node "m02"
	I0717 22:15:54.848552   37994 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:15:54.848893   37994 kapi.go:59] client config for multinode-009530: &rest.Config{Host:"https://192.168.39.222:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/client.crt", KeyFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/client.key", CAFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 22:15:54.849331   37994 request.go:1188] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0717 22:15:54.849387   37994 round_trippers.go:463] DELETE https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m02
	I0717 22:15:54.849400   37994 round_trippers.go:469] Request Headers:
	I0717 22:15:54.849413   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:15:54.849427   37994 round_trippers.go:473]     Content-Type: application/json
	I0717 22:15:54.849438   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:15:54.863045   37994 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0717 22:15:54.863074   37994 round_trippers.go:577] Response Headers:
	I0717 22:15:54.863085   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:15:54.863094   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:15:54.863102   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:15:54.863110   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:15:54.863119   37994 round_trippers.go:580]     Content-Length: 171
	I0717 22:15:54.863128   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:15:54 GMT
	I0717 22:15:54.863140   37994 round_trippers.go:580]     Audit-Id: 6d4a8d98-b7a4-407c-8bdc-d7f5b5ded5a8
	I0717 22:15:54.863163   37994 request.go:1188] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-009530-m02","kind":"nodes","uid":"329b572e-b661-4301-b778-f37c0f69b53d"}}
	I0717 22:15:54.863223   37994 node.go:124] successfully deleted node "m02"
	I0717 22:15:54.863240   37994 start.go:318] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.146 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0717 22:15:54.863260   37994 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.146 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0717 22:15:54.863291   37994 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token h845z2.5dbdmz49n3c6w11e --discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-009530-m02"
	I0717 22:15:54.920291   37994 command_runner.go:130] ! W0717 22:15:54.911648    2577 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0717 22:15:54.920354   37994 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0717 22:15:55.050026   37994 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0717 22:15:55.050078   37994 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0717 22:15:55.808362   37994 command_runner.go:130] > [preflight] Running pre-flight checks
	I0717 22:15:55.808396   37994 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0717 22:15:55.808417   37994 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0717 22:15:55.808426   37994 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 22:15:55.808434   37994 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 22:15:55.808448   37994 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0717 22:15:55.808457   37994 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0717 22:15:55.808465   37994 command_runner.go:130] > This node has joined the cluster:
	I0717 22:15:55.808480   37994 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0717 22:15:55.808493   37994 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0717 22:15:55.808508   37994 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0717 22:15:55.808534   37994 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0717 22:15:56.075364   37994 start.go:303] JoinCluster complete in 4.672697881s
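The join sequence above mirrors a manual kubeadm worker re-join: the stale Node object is deleted through the API, "kubeadm join" runs with a bootstrap token plus the CA cert hash, and kubelet is re-enabled. A minimal, illustrative check of the result from the control-plane context (not part of the captured log; the context name is assumed to match the minikube profile, as minikube sets it by default):

    kubectl --context multinode-009530 get nodes -o wide
    kubectl --context multinode-009530 get node multinode-009530-m02 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'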
	I0717 22:15:56.075397   37994 cni.go:84] Creating CNI manager for ""
	I0717 22:15:56.075422   37994 cni.go:137] 3 nodes found, recommending kindnet
	I0717 22:15:56.075505   37994 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 22:15:56.081216   37994 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0717 22:15:56.081238   37994 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0717 22:15:56.081248   37994 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0717 22:15:56.081258   37994 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 22:15:56.081266   37994 command_runner.go:130] > Access: 2023-07-17 22:13:32.496064079 +0000
	I0717 22:15:56.081274   37994 command_runner.go:130] > Modify: 2023-07-15 02:34:28.000000000 +0000
	I0717 22:15:56.081281   37994 command_runner.go:130] > Change: 2023-07-17 22:13:30.473064079 +0000
	I0717 22:15:56.081290   37994 command_runner.go:130] >  Birth: -
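The stat output above only confirms that the standard CNI plugin binaries bundled in the node image (here /opt/cni/bin/portmap) are present before the kindnet manifest is applied. A hedged way to run the same check by hand, assuming this minikube release supports the ssh --node flag:

    minikube -p multinode-009530 ssh -- ls -l /opt/cni/bin
    minikube -p multinode-009530 ssh -n multinode-009530-m02 -- ls -l /opt/cni/bin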
	I0717 22:15:56.081329   37994 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 22:15:56.081341   37994 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 22:15:56.099910   37994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 22:15:56.569915   37994 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0717 22:15:56.578151   37994 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0717 22:15:56.581955   37994 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0717 22:15:56.595257   37994 command_runner.go:130] > daemonset.apps/kindnet configured
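With three nodes detected, minikube re-applies its kindnet CNI manifest; the "unchanged"/"configured" lines mean the RBAC objects already existed and only the DaemonSet was updated. An illustrative follow-up check that the DaemonSet actually schedules a pod onto the re-joined node (assumes kindnet runs in kube-system, as in minikube's bundled manifest; not part of the captured log):

    kubectl --context multinode-009530 -n kube-system rollout status daemonset/kindnet --timeout=120s
    kubectl --context multinode-009530 -n kube-system get pods -o wide | grep kindnet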
	I0717 22:15:56.599046   37994 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:15:56.599245   37994 kapi.go:59] client config for multinode-009530: &rest.Config{Host:"https://192.168.39.222:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/client.crt", KeyFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/client.key", CAFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 22:15:56.599545   37994 round_trippers.go:463] GET https://192.168.39.222:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0717 22:15:56.599564   37994 round_trippers.go:469] Request Headers:
	I0717 22:15:56.599574   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:15:56.599581   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:15:56.601818   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:15:56.601840   37994 round_trippers.go:577] Response Headers:
	I0717 22:15:56.601850   37994 round_trippers.go:580]     Audit-Id: d4148003-0aa8-4914-a419-baad58bed4eb
	I0717 22:15:56.601860   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:15:56.601866   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:15:56.601872   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:15:56.601878   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:15:56.601887   37994 round_trippers.go:580]     Content-Length: 291
	I0717 22:15:56.601892   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:15:56 GMT
	I0717 22:15:56.601914   37994 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c60c6831-559f-4b19-8b15-656b8972a35c","resourceVersion":"882","creationTimestamp":"2023-07-17T22:03:52Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0717 22:15:56.601999   37994 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-009530" context rescaled to 1 replicas
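The autoscaling/v1 Scale call above is how minikube reads the coredns Deployment's replica count so it can pin the deployment to one replica per cluster after node churn. An equivalent imperative check and scale, shown only to illustrate what the 200 response represents:

    kubectl --context multinode-009530 -n kube-system get deployment coredns
    kubectl --context multinode-009530 -n kube-system scale deployment/coredns --replicas=1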
	I0717 22:15:56.602031   37994 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.146 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0717 22:15:56.604518   37994 out.go:177] * Verifying Kubernetes components...
	I0717 22:15:56.605682   37994 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:15:56.619003   37994 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:15:56.619217   37994 kapi.go:59] client config for multinode-009530: &rest.Config{Host:"https://192.168.39.222:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/client.crt", KeyFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/client.key", CAFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 22:15:56.619426   37994 node_ready.go:35] waiting up to 6m0s for node "multinode-009530-m02" to be "Ready" ...
	I0717 22:15:56.619480   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m02
	I0717 22:15:56.619485   37994 round_trippers.go:469] Request Headers:
	I0717 22:15:56.619492   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:15:56.619498   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:15:56.622420   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:15:56.622444   37994 round_trippers.go:577] Response Headers:
	I0717 22:15:56.622454   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:15:56 GMT
	I0717 22:15:56.622463   37994 round_trippers.go:580]     Audit-Id: 626d60a7-3179-4f18-a893-9812f7a1bfd5
	I0717 22:15:56.622470   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:15:56.622479   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:15:56.622491   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:15:56.622501   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:15:56.622648   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530-m02","uid":"3aa87aa6-cbc0-42fe-abf1-386887aa827b","resourceVersion":"1013","creationTimestamp":"2023-07-17T22:15:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:15:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:15:55Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0717 22:15:56.622961   37994 node_ready.go:49] node "multinode-009530-m02" has status "Ready":"True"
	I0717 22:15:56.622977   37994 node_ready.go:38] duration metric: took 3.53654ms waiting for node "multinode-009530-m02" to be "Ready" ...
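node_ready polls the Node object until its Ready condition reports "True", within the 6m0s budget set at start.go:223. The same wait can be expressed declaratively; an illustrative sketch, not taken from the log:

    kubectl --context multinode-009530 wait --for=condition=Ready node/multinode-009530-m02 --timeout=6m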
	I0717 22:15:56.622984   37994 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:15:56.623050   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods
	I0717 22:15:56.623061   37994 round_trippers.go:469] Request Headers:
	I0717 22:15:56.623071   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:15:56.623078   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:15:56.626699   37994 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:15:56.626715   37994 round_trippers.go:577] Response Headers:
	I0717 22:15:56.626726   37994 round_trippers.go:580]     Audit-Id: 7c525bb8-7a5b-4b03-a3e2-dc88dcabf2dd
	I0717 22:15:56.626736   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:15:56.626744   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:15:56.626754   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:15:56.626762   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:15:56.626770   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:15:56 GMT
	I0717 22:15:56.628301   37994 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1022"},"items":[{"metadata":{"name":"coredns-5d78c9869d-z4fr8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"1fb1d992-a7b6-4259-ba61-dc4092c65c44","resourceVersion":"866","creationTimestamp":"2023-07-17T22:04:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a6163f77-c2a4-4a3f-a656-fb3401fc7602","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6163f77-c2a4-4a3f-a656-fb3401fc7602\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82249 chars]
	I0717 22:15:56.631115   37994 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-z4fr8" in "kube-system" namespace to be "Ready" ...
	I0717 22:15:56.631179   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-z4fr8
	I0717 22:15:56.631184   37994 round_trippers.go:469] Request Headers:
	I0717 22:15:56.631191   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:15:56.631198   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:15:56.633458   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:15:56.633476   37994 round_trippers.go:577] Response Headers:
	I0717 22:15:56.633485   37994 round_trippers.go:580]     Audit-Id: d1ef39fe-251d-441f-8c09-59fb096997ab
	I0717 22:15:56.633493   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:15:56.633501   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:15:56.633509   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:15:56.633537   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:15:56.633557   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:15:56 GMT
	I0717 22:15:56.633837   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-z4fr8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"1fb1d992-a7b6-4259-ba61-dc4092c65c44","resourceVersion":"866","creationTimestamp":"2023-07-17T22:04:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a6163f77-c2a4-4a3f-a656-fb3401fc7602","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6163f77-c2a4-4a3f-a656-fb3401fc7602\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0717 22:15:56.634204   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:15:56.634218   37994 round_trippers.go:469] Request Headers:
	I0717 22:15:56.634240   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:15:56.634251   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:15:56.636274   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:15:56.636293   37994 round_trippers.go:577] Response Headers:
	I0717 22:15:56.636302   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:15:56.636310   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:15:56.636324   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:15:56.636332   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:15:56.636345   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:15:56 GMT
	I0717 22:15:56.636357   37994 round_trippers.go:580]     Audit-Id: 74f38b90-3797-42fb-8dda-b8fb9b596926
	I0717 22:15:56.636481   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"890","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0717 22:15:56.636830   37994 pod_ready.go:92] pod "coredns-5d78c9869d-z4fr8" in "kube-system" namespace has status "Ready":"True"
	I0717 22:15:56.636847   37994 pod_ready.go:81] duration metric: took 5.711733ms waiting for pod "coredns-5d78c9869d-z4fr8" in "kube-system" namespace to be "Ready" ...
	I0717 22:15:56.636860   37994 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:15:56.636919   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-009530
	I0717 22:15:56.636928   37994 round_trippers.go:469] Request Headers:
	I0717 22:15:56.636935   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:15:56.636944   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:15:56.638998   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:15:56.639016   37994 round_trippers.go:577] Response Headers:
	I0717 22:15:56.639026   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:15:56.639034   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:15:56.639042   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:15:56.639052   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:15:56 GMT
	I0717 22:15:56.639064   37994 round_trippers.go:580]     Audit-Id: 56eb0723-aecc-4e0b-9493-4c7c73c6b0e2
	I0717 22:15:56.639075   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:15:56.639172   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-009530","namespace":"kube-system","uid":"aed75ad9-0156-4275-8a41-b68d09c15660","resourceVersion":"857","creationTimestamp":"2023-07-17T22:03:52Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.222:2379","kubernetes.io/config.hash":"ab77d0bbc5cf528d40fb1d6635b3acda","kubernetes.io/config.mirror":"ab77d0bbc5cf528d40fb1d6635b3acda","kubernetes.io/config.seen":"2023-07-17T22:03:52.473671520Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:03:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0717 22:15:56.639538   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:15:56.639553   37994 round_trippers.go:469] Request Headers:
	I0717 22:15:56.639562   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:15:56.639569   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:15:56.641477   37994 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 22:15:56.641489   37994 round_trippers.go:577] Response Headers:
	I0717 22:15:56.641495   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:15:56.641501   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:15:56.641506   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:15:56.641513   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:15:56 GMT
	I0717 22:15:56.641536   37994 round_trippers.go:580]     Audit-Id: 2ae7c3ff-0a4d-487a-9f38-c1220535c471
	I0717 22:15:56.641545   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:15:56.641872   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"890","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0717 22:15:56.642221   37994 pod_ready.go:92] pod "etcd-multinode-009530" in "kube-system" namespace has status "Ready":"True"
	I0717 22:15:56.642237   37994 pod_ready.go:81] duration metric: took 5.367101ms waiting for pod "etcd-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:15:56.642250   37994 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:15:56.642286   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-009530
	I0717 22:15:56.642291   37994 round_trippers.go:469] Request Headers:
	I0717 22:15:56.642298   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:15:56.642307   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:15:56.644149   37994 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 22:15:56.644160   37994 round_trippers.go:577] Response Headers:
	I0717 22:15:56.644168   37994 round_trippers.go:580]     Audit-Id: 076cc17d-107b-4790-881d-3c667f9f0d36
	I0717 22:15:56.644173   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:15:56.644178   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:15:56.644183   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:15:56.644191   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:15:56.644200   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:15:56 GMT
	I0717 22:15:56.644434   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-009530","namespace":"kube-system","uid":"958b1550-f15f-49f3-acf3-dbab69f82fb8","resourceVersion":"856","creationTimestamp":"2023-07-17T22:03:52Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.222:8443","kubernetes.io/config.hash":"49e7615bd1aa66d6e32161e120c48180","kubernetes.io/config.mirror":"49e7615bd1aa66d6e32161e120c48180","kubernetes.io/config.seen":"2023-07-17T22:03:52.473675304Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:03:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0717 22:15:56.644900   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:15:56.644917   37994 round_trippers.go:469] Request Headers:
	I0717 22:15:56.644928   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:15:56.644938   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:15:56.646874   37994 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 22:15:56.646886   37994 round_trippers.go:577] Response Headers:
	I0717 22:15:56.646892   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:15:56.646898   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:15:56 GMT
	I0717 22:15:56.646903   37994 round_trippers.go:580]     Audit-Id: a56f377f-f950-4ee0-97b7-e486b1ccd3c6
	I0717 22:15:56.646911   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:15:56.646917   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:15:56.646928   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:15:56.647102   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"890","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0717 22:15:56.647429   37994 pod_ready.go:92] pod "kube-apiserver-multinode-009530" in "kube-system" namespace has status "Ready":"True"
	I0717 22:15:56.647443   37994 pod_ready.go:81] duration metric: took 5.186863ms waiting for pod "kube-apiserver-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:15:56.647454   37994 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:15:56.647511   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-009530
	I0717 22:15:56.647521   37994 round_trippers.go:469] Request Headers:
	I0717 22:15:56.647532   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:15:56.647545   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:15:56.649765   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:15:56.649777   37994 round_trippers.go:577] Response Headers:
	I0717 22:15:56.649783   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:15:56.649788   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:15:56.649794   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:15:56.649802   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:15:56 GMT
	I0717 22:15:56.649814   37994 round_trippers.go:580]     Audit-Id: fbb52382-53cb-4d10-ac0b-85e6d2d34d07
	I0717 22:15:56.649823   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:15:56.650089   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-009530","namespace":"kube-system","uid":"1c9dba7c-6497-41f0-b751-17988278c710","resourceVersion":"864","creationTimestamp":"2023-07-17T22:03:52Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d8b61663949a18745a23bcf487c538f2","kubernetes.io/config.mirror":"d8b61663949a18745a23bcf487c538f2","kubernetes.io/config.seen":"2023-07-17T22:03:52.473676600Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:03:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0717 22:15:56.650469   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:15:56.650481   37994 round_trippers.go:469] Request Headers:
	I0717 22:15:56.650488   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:15:56.650495   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:15:56.652813   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:15:56.652824   37994 round_trippers.go:577] Response Headers:
	I0717 22:15:56.652830   37994 round_trippers.go:580]     Audit-Id: 36500127-0da1-4991-b0ad-ab227f4928ee
	I0717 22:15:56.652836   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:15:56.652841   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:15:56.652846   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:15:56.652852   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:15:56.652857   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:15:56 GMT
	I0717 22:15:56.653019   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"890","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0717 22:15:56.653311   37994 pod_ready.go:92] pod "kube-controller-manager-multinode-009530" in "kube-system" namespace has status "Ready":"True"
	I0717 22:15:56.653323   37994 pod_ready.go:81] duration metric: took 5.858403ms waiting for pod "kube-controller-manager-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:15:56.653331   37994 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6rxv8" in "kube-system" namespace to be "Ready" ...
	I0717 22:15:56.819529   37994 request.go:628] Waited for 166.138913ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6rxv8
	I0717 22:15:56.819586   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6rxv8
	I0717 22:15:56.819591   37994 round_trippers.go:469] Request Headers:
	I0717 22:15:56.819598   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:15:56.819609   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:15:56.822364   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:15:56.822389   37994 round_trippers.go:577] Response Headers:
	I0717 22:15:56.822399   37994 round_trippers.go:580]     Audit-Id: a28499d0-030f-48c1-8ea5-752c448ef4a7
	I0717 22:15:56.822409   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:15:56.822416   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:15:56.822424   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:15:56.822437   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:15:56.822447   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:15:56 GMT
	I0717 22:15:56.822582   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6rxv8","generateName":"kube-proxy-","namespace":"kube-system","uid":"0d197eb7-b5bd-446a-b2f4-c513c06afcbe","resourceVersion":"1018","creationTimestamp":"2023-07-17T22:04:43Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bcb9f55e-4db6-4370-ad50-de72169bcc0c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcb9f55e-4db6-4370-ad50-de72169bcc0c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5887 chars]
	I0717 22:15:57.020561   37994 request.go:628] Waited for 197.404918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m02
	I0717 22:15:57.020624   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m02
	I0717 22:15:57.020631   37994 round_trippers.go:469] Request Headers:
	I0717 22:15:57.020647   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:15:57.020659   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:15:57.023815   37994 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:15:57.023833   37994 round_trippers.go:577] Response Headers:
	I0717 22:15:57.023840   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:15:57.023846   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:15:57 GMT
	I0717 22:15:57.023851   37994 round_trippers.go:580]     Audit-Id: 4fc3395c-c7ff-4629-a60e-567776a583ba
	I0717 22:15:57.023857   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:15:57.023862   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:15:57.023867   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:15:57.024430   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530-m02","uid":"3aa87aa6-cbc0-42fe-abf1-386887aa827b","resourceVersion":"1013","creationTimestamp":"2023-07-17T22:15:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:15:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:15:55Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0717 22:15:57.525437   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6rxv8
	I0717 22:15:57.525458   37994 round_trippers.go:469] Request Headers:
	I0717 22:15:57.525466   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:15:57.525472   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:15:57.527992   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:15:57.528012   37994 round_trippers.go:577] Response Headers:
	I0717 22:15:57.528019   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:15:57.528025   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:15:57.528031   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:15:57.528036   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:15:57.528042   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:15:57 GMT
	I0717 22:15:57.528047   37994 round_trippers.go:580]     Audit-Id: a31ca1de-9e2e-492e-9570-2358a393c8a7
	I0717 22:15:57.528248   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6rxv8","generateName":"kube-proxy-","namespace":"kube-system","uid":"0d197eb7-b5bd-446a-b2f4-c513c06afcbe","resourceVersion":"1031","creationTimestamp":"2023-07-17T22:04:43Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bcb9f55e-4db6-4370-ad50-de72169bcc0c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcb9f55e-4db6-4370-ad50-de72169bcc0c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0717 22:15:57.528659   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m02
	I0717 22:15:57.528672   37994 round_trippers.go:469] Request Headers:
	I0717 22:15:57.528679   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:15:57.528685   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:15:57.530933   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:15:57.530947   37994 round_trippers.go:577] Response Headers:
	I0717 22:15:57.530953   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:15:57.530959   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:15:57.530964   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:15:57 GMT
	I0717 22:15:57.530973   37994 round_trippers.go:580]     Audit-Id: 548c7b44-24a2-4e2b-b369-1617a2c7a5a1
	I0717 22:15:57.530982   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:15:57.530991   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:15:57.531507   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530-m02","uid":"3aa87aa6-cbc0-42fe-abf1-386887aa827b","resourceVersion":"1013","creationTimestamp":"2023-07-17T22:15:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:15:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:15:55Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0717 22:15:57.531779   37994 pod_ready.go:92] pod "kube-proxy-6rxv8" in "kube-system" namespace has status "Ready":"True"
	I0717 22:15:57.531797   37994 pod_ready.go:81] duration metric: took 878.459267ms waiting for pod "kube-proxy-6rxv8" in "kube-system" namespace to be "Ready" ...
	I0717 22:15:57.531809   37994 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jv9h4" in "kube-system" namespace to be "Ready" ...
	I0717 22:15:57.620121   37994 request.go:628] Waited for 88.256812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jv9h4
	I0717 22:15:57.620177   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jv9h4
	I0717 22:15:57.620181   37994 round_trippers.go:469] Request Headers:
	I0717 22:15:57.620189   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:15:57.620196   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:15:57.622743   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:15:57.622771   37994 round_trippers.go:577] Response Headers:
	I0717 22:15:57.622782   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:15:57.622789   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:15:57 GMT
	I0717 22:15:57.622795   37994 round_trippers.go:580]     Audit-Id: bf2a822e-e337-4c37-b5a9-9292c4fcb5da
	I0717 22:15:57.622800   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:15:57.622809   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:15:57.622815   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:15:57.622940   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jv9h4","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3b140d5-ec70-4ffe-8372-7fb67d0fb0c9","resourceVersion":"711","creationTimestamp":"2023-07-17T22:05:32Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bcb9f55e-4db6-4370-ad50-de72169bcc0c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:05:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcb9f55e-4db6-4370-ad50-de72169bcc0c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0717 22:15:57.819638   37994 request.go:628] Waited for 196.304177ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m03
	I0717 22:15:57.819714   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m03
	I0717 22:15:57.819721   37994 round_trippers.go:469] Request Headers:
	I0717 22:15:57.819733   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:15:57.819741   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:15:57.823044   37994 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:15:57.823064   37994 round_trippers.go:577] Response Headers:
	I0717 22:15:57.823073   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:15:57.823081   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:15:57.823090   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:15:57.823097   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:15:57.823105   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:15:57 GMT
	I0717 22:15:57.823113   37994 round_trippers.go:580]     Audit-Id: 6102efb8-33c1-4b27-a501-0fcd18775c43
	I0717 22:15:57.823191   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530-m03","uid":"cadf8157-0bcb-4971-8496-da993f9c43bf","resourceVersion":"818","creationTimestamp":"2023-07-17T22:06:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:06:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3413 chars]
	I0717 22:15:57.823435   37994 pod_ready.go:92] pod "kube-proxy-jv9h4" in "kube-system" namespace has status "Ready":"True"
	I0717 22:15:57.823449   37994 pod_ready.go:81] duration metric: took 291.632983ms waiting for pod "kube-proxy-jv9h4" in "kube-system" namespace to be "Ready" ...
	I0717 22:15:57.823462   37994 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m5spw" in "kube-system" namespace to be "Ready" ...
	I0717 22:15:58.019936   37994 request.go:628] Waited for 196.40009ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m5spw
	I0717 22:15:58.019996   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m5spw
	I0717 22:15:58.020004   37994 round_trippers.go:469] Request Headers:
	I0717 22:15:58.020016   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:15:58.020027   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:15:58.023240   37994 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:15:58.023267   37994 round_trippers.go:577] Response Headers:
	I0717 22:15:58.023276   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:15:58.023284   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:15:58 GMT
	I0717 22:15:58.023291   37994 round_trippers.go:580]     Audit-Id: 337fab25-1ef8-4244-a2e1-c7f560eaa48b
	I0717 22:15:58.023298   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:15:58.023305   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:15:58.023312   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:15:58.023437   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-m5spw","generateName":"kube-proxy-","namespace":"kube-system","uid":"a4bf0eb3-126a-463e-a670-b4793e1c5ec9","resourceVersion":"825","creationTimestamp":"2023-07-17T22:04:05Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bcb9f55e-4db6-4370-ad50-de72169bcc0c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcb9f55e-4db6-4370-ad50-de72169bcc0c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0717 22:15:58.220152   37994 request.go:628] Waited for 196.306582ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:15:58.220202   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:15:58.220207   37994 round_trippers.go:469] Request Headers:
	I0717 22:15:58.220215   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:15:58.220221   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:15:58.222948   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:15:58.222967   37994 round_trippers.go:577] Response Headers:
	I0717 22:15:58.222978   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:15:58.222986   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:15:58 GMT
	I0717 22:15:58.222994   37994 round_trippers.go:580]     Audit-Id: ba4a09a2-3d46-435a-a815-eb22b5ee413c
	I0717 22:15:58.223003   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:15:58.223014   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:15:58.223045   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:15:58.223376   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"890","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0717 22:15:58.223666   37994 pod_ready.go:92] pod "kube-proxy-m5spw" in "kube-system" namespace has status "Ready":"True"
	I0717 22:15:58.223679   37994 pod_ready.go:81] duration metric: took 400.21068ms waiting for pod "kube-proxy-m5spw" in "kube-system" namespace to be "Ready" ...
	I0717 22:15:58.223688   37994 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:15:58.420114   37994 request.go:628] Waited for 196.362746ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-009530
	I0717 22:15:58.420184   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-009530
	I0717 22:15:58.420192   37994 round_trippers.go:469] Request Headers:
	I0717 22:15:58.420202   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:15:58.420214   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:15:58.423113   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:15:58.423132   37994 round_trippers.go:577] Response Headers:
	I0717 22:15:58.423138   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:15:58.423144   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:15:58.423149   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:15:58.423155   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:15:58 GMT
	I0717 22:15:58.423160   37994 round_trippers.go:580]     Audit-Id: 9216f856-4b39-4d98-bd4f-4c4c2a14786f
	I0717 22:15:58.423166   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:15:58.423311   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-009530","namespace":"kube-system","uid":"5da85194-923d-40f6-ab44-86209b1f057d","resourceVersion":"859","creationTimestamp":"2023-07-17T22:03:52Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"036d300e0ec7bf28a26e0c644008bbd5","kubernetes.io/config.mirror":"036d300e0ec7bf28a26e0c644008bbd5","kubernetes.io/config.seen":"2023-07-17T22:03:52.473677561Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:03:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0717 22:15:58.620004   37994 request.go:628] Waited for 196.352193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:15:58.620050   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:15:58.620056   37994 round_trippers.go:469] Request Headers:
	I0717 22:15:58.620068   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:15:58.620082   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:15:58.623006   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:15:58.623031   37994 round_trippers.go:577] Response Headers:
	I0717 22:15:58.623039   37994 round_trippers.go:580]     Audit-Id: 8de7b83d-e72b-4c92-b360-da7a37d35084
	I0717 22:15:58.623048   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:15:58.623058   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:15:58.623065   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:15:58.623073   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:15:58.623083   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:15:58 GMT
	I0717 22:15:58.623269   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"890","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0717 22:15:58.623554   37994 pod_ready.go:92] pod "kube-scheduler-multinode-009530" in "kube-system" namespace has status "Ready":"True"
	I0717 22:15:58.623566   37994 pod_ready.go:81] duration metric: took 399.873074ms waiting for pod "kube-scheduler-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:15:58.623578   37994 pod_ready.go:38] duration metric: took 2.000584575s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:15:58.623591   37994 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 22:15:58.623630   37994 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:15:58.639734   37994 system_svc.go:56] duration metric: took 16.137786ms WaitForService to wait for kubelet.
	I0717 22:15:58.639762   37994 kubeadm.go:581] duration metric: took 2.037706626s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 22:15:58.639779   37994 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:15:58.820178   37994 request.go:628] Waited for 180.342036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes
	I0717 22:15:58.820244   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes
	I0717 22:15:58.820249   37994 round_trippers.go:469] Request Headers:
	I0717 22:15:58.820257   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:15:58.820263   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:15:58.823531   37994 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:15:58.823554   37994 round_trippers.go:577] Response Headers:
	I0717 22:15:58.823564   37994 round_trippers.go:580]     Audit-Id: ec0c2659-a5be-4bdc-b9d7-34cec3b04b39
	I0717 22:15:58.823573   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:15:58.823580   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:15:58.823588   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:15:58.823597   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:15:58.823606   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:15:58 GMT
	I0717 22:15:58.824106   37994 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1036"},"items":[{"metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"890","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 15106 chars]
	I0717 22:15:58.824940   37994 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:15:58.824969   37994 node_conditions.go:123] node cpu capacity is 2
	I0717 22:15:58.824983   37994 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:15:58.824987   37994 node_conditions.go:123] node cpu capacity is 2
	I0717 22:15:58.824992   37994 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:15:58.824998   37994 node_conditions.go:123] node cpu capacity is 2
	I0717 22:15:58.825003   37994 node_conditions.go:105] duration metric: took 185.219139ms to run NodePressure ...
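
The NodePressure check above reads capacity straight from the nodes API (three nodes, each reporting 2 CPUs and 17784752Ki of ephemeral storage). As a hedged aside, the same figures can be pulled with kubectl; the context name below assumes minikube's usual convention of naming the kubeconfig context after the profile (multinode-009530):

    # Sketch: print per-node capacity the way the NodePressure check reads it (context name assumed)
    kubectl --context multinode-009530 get nodes \
      -o jsonpath='{range .items[*]}{.metadata.name}{"  cpu="}{.status.capacity.cpu}{"  ephemeral-storage="}{.status.capacity.ephemeral-storage}{"\n"}{end}'
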
	I0717 22:15:58.825017   37994 start.go:228] waiting for startup goroutines ...
	I0717 22:15:58.825040   37994 start.go:242] writing updated cluster config ...
	I0717 22:15:58.825615   37994 config.go:182] Loaded profile config "multinode-009530": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:15:58.825745   37994 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/config.json ...
	I0717 22:15:58.829017   37994 out.go:177] * Starting worker node multinode-009530-m03 in cluster multinode-009530
	I0717 22:15:58.830262   37994 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 22:15:58.830282   37994 cache.go:57] Caching tarball of preloaded images
	I0717 22:15:58.830368   37994 preload.go:174] Found /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 22:15:58.830378   37994 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 22:15:58.830504   37994 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/config.json ...
	I0717 22:15:58.830669   37994 start.go:365] acquiring machines lock for multinode-009530-m03: {Name:mk8a02d78ba6485e1a7d39c2e7feff944c6a9dbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 22:15:58.830709   37994 start.go:369] acquired machines lock for "multinode-009530-m03" in 23.217µs
	I0717 22:15:58.830722   37994 start.go:96] Skipping create...Using existing machine configuration
	I0717 22:15:58.830729   37994 fix.go:54] fixHost starting: m03
	I0717 22:15:58.830962   37994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:15:58.830990   37994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:15:58.848003   37994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42563
	I0717 22:15:58.848389   37994 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:15:58.848840   37994 main.go:141] libmachine: Using API Version  1
	I0717 22:15:58.848866   37994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:15:58.849248   37994 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:15:58.849438   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .DriverName
	I0717 22:15:58.849614   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetState
	I0717 22:15:58.851079   37994 fix.go:102] recreateIfNeeded on multinode-009530-m03: state=Running err=<nil>
	W0717 22:15:58.851095   37994 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 22:15:58.853102   37994 out.go:177] * Updating the running kvm2 "multinode-009530-m03" VM ...
	I0717 22:15:58.854403   37994 machine.go:88] provisioning docker machine ...
	I0717 22:15:58.854422   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .DriverName
	I0717 22:15:58.854602   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetMachineName
	I0717 22:15:58.854770   37994 buildroot.go:166] provisioning hostname "multinode-009530-m03"
	I0717 22:15:58.854795   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetMachineName
	I0717 22:15:58.854927   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHHostname
	I0717 22:15:58.857139   37994 main.go:141] libmachine: (multinode-009530-m03) DBG | domain multinode-009530-m03 has defined MAC address 52:54:00:a7:99:c6 in network mk-multinode-009530
	I0717 22:15:58.857540   37994 main.go:141] libmachine: (multinode-009530-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:99:c6", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:06:04 +0000 UTC Type:0 Mac:52:54:00:a7:99:c6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-009530-m03 Clientid:01:52:54:00:a7:99:c6}
	I0717 22:15:58.857571   37994 main.go:141] libmachine: (multinode-009530-m03) DBG | domain multinode-009530-m03 has defined IP address 192.168.39.205 and MAC address 52:54:00:a7:99:c6 in network mk-multinode-009530
	I0717 22:15:58.857683   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHPort
	I0717 22:15:58.857835   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHKeyPath
	I0717 22:15:58.857950   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHKeyPath
	I0717 22:15:58.858083   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHUsername
	I0717 22:15:58.858206   37994 main.go:141] libmachine: Using SSH client type: native
	I0717 22:15:58.858583   37994 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0717 22:15:58.858597   37994 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-009530-m03 && echo "multinode-009530-m03" | sudo tee /etc/hostname
	I0717 22:15:58.989302   37994 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-009530-m03
	
	I0717 22:15:58.989327   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHHostname
	I0717 22:15:58.991954   37994 main.go:141] libmachine: (multinode-009530-m03) DBG | domain multinode-009530-m03 has defined MAC address 52:54:00:a7:99:c6 in network mk-multinode-009530
	I0717 22:15:58.992322   37994 main.go:141] libmachine: (multinode-009530-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:99:c6", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:06:04 +0000 UTC Type:0 Mac:52:54:00:a7:99:c6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-009530-m03 Clientid:01:52:54:00:a7:99:c6}
	I0717 22:15:58.992346   37994 main.go:141] libmachine: (multinode-009530-m03) DBG | domain multinode-009530-m03 has defined IP address 192.168.39.205 and MAC address 52:54:00:a7:99:c6 in network mk-multinode-009530
	I0717 22:15:58.992621   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHPort
	I0717 22:15:58.992862   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHKeyPath
	I0717 22:15:58.993047   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHKeyPath
	I0717 22:15:58.993190   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHUsername
	I0717 22:15:58.993333   37994 main.go:141] libmachine: Using SSH client type: native
	I0717 22:15:58.993806   37994 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0717 22:15:58.993826   37994 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-009530-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-009530-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-009530-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 22:15:59.114423   37994 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:15:59.114450   37994 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16899-15759/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-15759/.minikube}
	I0717 22:15:59.114471   37994 buildroot.go:174] setting up certificates
	I0717 22:15:59.114481   37994 provision.go:83] configureAuth start
	I0717 22:15:59.114492   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetMachineName
	I0717 22:15:59.114857   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetIP
	I0717 22:15:59.117816   37994 main.go:141] libmachine: (multinode-009530-m03) DBG | domain multinode-009530-m03 has defined MAC address 52:54:00:a7:99:c6 in network mk-multinode-009530
	I0717 22:15:59.118200   37994 main.go:141] libmachine: (multinode-009530-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:99:c6", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:06:04 +0000 UTC Type:0 Mac:52:54:00:a7:99:c6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-009530-m03 Clientid:01:52:54:00:a7:99:c6}
	I0717 22:15:59.118225   37994 main.go:141] libmachine: (multinode-009530-m03) DBG | domain multinode-009530-m03 has defined IP address 192.168.39.205 and MAC address 52:54:00:a7:99:c6 in network mk-multinode-009530
	I0717 22:15:59.118441   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHHostname
	I0717 22:15:59.120856   37994 main.go:141] libmachine: (multinode-009530-m03) DBG | domain multinode-009530-m03 has defined MAC address 52:54:00:a7:99:c6 in network mk-multinode-009530
	I0717 22:15:59.121217   37994 main.go:141] libmachine: (multinode-009530-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:99:c6", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:06:04 +0000 UTC Type:0 Mac:52:54:00:a7:99:c6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-009530-m03 Clientid:01:52:54:00:a7:99:c6}
	I0717 22:15:59.121247   37994 main.go:141] libmachine: (multinode-009530-m03) DBG | domain multinode-009530-m03 has defined IP address 192.168.39.205 and MAC address 52:54:00:a7:99:c6 in network mk-multinode-009530
	I0717 22:15:59.121381   37994 provision.go:138] copyHostCerts
	I0717 22:15:59.121406   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem
	I0717 22:15:59.121431   37994 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem, removing ...
	I0717 22:15:59.121439   37994 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem
	I0717 22:15:59.121501   37994 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem (1675 bytes)
	I0717 22:15:59.121590   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem
	I0717 22:15:59.121609   37994 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem, removing ...
	I0717 22:15:59.121616   37994 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem
	I0717 22:15:59.121642   37994 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem (1078 bytes)
	I0717 22:15:59.121685   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem
	I0717 22:15:59.121702   37994 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem, removing ...
	I0717 22:15:59.121706   37994 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem
	I0717 22:15:59.121726   37994 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem (1123 bytes)
	I0717 22:15:59.121778   37994 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem org=jenkins.multinode-009530-m03 san=[192.168.39.205 192.168.39.205 localhost 127.0.0.1 minikube multinode-009530-m03]
	I0717 22:15:59.257497   37994 provision.go:172] copyRemoteCerts
	I0717 22:15:59.257563   37994 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 22:15:59.257584   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHHostname
	I0717 22:15:59.260003   37994 main.go:141] libmachine: (multinode-009530-m03) DBG | domain multinode-009530-m03 has defined MAC address 52:54:00:a7:99:c6 in network mk-multinode-009530
	I0717 22:15:59.260334   37994 main.go:141] libmachine: (multinode-009530-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:99:c6", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:06:04 +0000 UTC Type:0 Mac:52:54:00:a7:99:c6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-009530-m03 Clientid:01:52:54:00:a7:99:c6}
	I0717 22:15:59.260362   37994 main.go:141] libmachine: (multinode-009530-m03) DBG | domain multinode-009530-m03 has defined IP address 192.168.39.205 and MAC address 52:54:00:a7:99:c6 in network mk-multinode-009530
	I0717 22:15:59.260517   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHPort
	I0717 22:15:59.260697   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHKeyPath
	I0717 22:15:59.260816   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHUsername
	I0717 22:15:59.260957   37994 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530-m03/id_rsa Username:docker}
	I0717 22:15:59.347853   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 22:15:59.347938   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0717 22:15:59.375907   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 22:15:59.375975   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 22:15:59.399292   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 22:15:59.399366   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 22:15:59.425197   37994 provision.go:86] duration metric: configureAuth took 310.700448ms
	I0717 22:15:59.425222   37994 buildroot.go:189] setting minikube options for container-runtime
	I0717 22:15:59.425424   37994 config.go:182] Loaded profile config "multinode-009530": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:15:59.425505   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHHostname
	I0717 22:15:59.427955   37994 main.go:141] libmachine: (multinode-009530-m03) DBG | domain multinode-009530-m03 has defined MAC address 52:54:00:a7:99:c6 in network mk-multinode-009530
	I0717 22:15:59.428408   37994 main.go:141] libmachine: (multinode-009530-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:99:c6", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:06:04 +0000 UTC Type:0 Mac:52:54:00:a7:99:c6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-009530-m03 Clientid:01:52:54:00:a7:99:c6}
	I0717 22:15:59.428441   37994 main.go:141] libmachine: (multinode-009530-m03) DBG | domain multinode-009530-m03 has defined IP address 192.168.39.205 and MAC address 52:54:00:a7:99:c6 in network mk-multinode-009530
	I0717 22:15:59.428637   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHPort
	I0717 22:15:59.428838   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHKeyPath
	I0717 22:15:59.428981   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHKeyPath
	I0717 22:15:59.429108   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHUsername
	I0717 22:15:59.429247   37994 main.go:141] libmachine: Using SSH client type: native
	I0717 22:15:59.429676   37994 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0717 22:15:59.429703   37994 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 22:17:30.057325   37994 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 22:17:30.057346   37994 machine.go:91] provisioned docker machine in 1m31.202928991s
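
For readers of the log: the %!s(MISSING) in the printf above is a Go format-verb artifact from minikube's logger, not part of the command that actually ran on the node. Judging from the command text and the echoed output, the step drops a one-line sysconfig file and then restarts CRI-O; a minimal sketch of what ends up on the machine:

    # /etc/sysconfig/crio.minikube as written by the provisioning step above (sketch)
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
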
	I0717 22:17:30.057357   37994 start.go:300] post-start starting for "multinode-009530-m03" (driver="kvm2")
	I0717 22:17:30.057366   37994 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 22:17:30.057382   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .DriverName
	I0717 22:17:30.057696   37994 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 22:17:30.057715   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHHostname
	I0717 22:17:30.060560   37994 main.go:141] libmachine: (multinode-009530-m03) DBG | domain multinode-009530-m03 has defined MAC address 52:54:00:a7:99:c6 in network mk-multinode-009530
	I0717 22:17:30.060950   37994 main.go:141] libmachine: (multinode-009530-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:99:c6", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:06:04 +0000 UTC Type:0 Mac:52:54:00:a7:99:c6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-009530-m03 Clientid:01:52:54:00:a7:99:c6}
	I0717 22:17:30.060974   37994 main.go:141] libmachine: (multinode-009530-m03) DBG | domain multinode-009530-m03 has defined IP address 192.168.39.205 and MAC address 52:54:00:a7:99:c6 in network mk-multinode-009530
	I0717 22:17:30.061164   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHPort
	I0717 22:17:30.061353   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHKeyPath
	I0717 22:17:30.061527   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHUsername
	I0717 22:17:30.061692   37994 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530-m03/id_rsa Username:docker}
	I0717 22:17:30.147509   37994 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 22:17:30.152843   37994 command_runner.go:130] > NAME=Buildroot
	I0717 22:17:30.152867   37994 command_runner.go:130] > VERSION=2021.02.12-1-gf5d52c7-dirty
	I0717 22:17:30.152873   37994 command_runner.go:130] > ID=buildroot
	I0717 22:17:30.152879   37994 command_runner.go:130] > VERSION_ID=2021.02.12
	I0717 22:17:30.152883   37994 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0717 22:17:30.152917   37994 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 22:17:30.152933   37994 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/addons for local assets ...
	I0717 22:17:30.153000   37994 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/files for local assets ...
	I0717 22:17:30.153066   37994 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> 229902.pem in /etc/ssl/certs
	I0717 22:17:30.153074   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> /etc/ssl/certs/229902.pem
	I0717 22:17:30.153154   37994 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 22:17:30.161547   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:17:30.186737   37994 start.go:303] post-start completed in 129.3666ms
	I0717 22:17:30.186763   37994 fix.go:56] fixHost completed within 1m31.356034939s
	I0717 22:17:30.186782   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHHostname
	I0717 22:17:30.189552   37994 main.go:141] libmachine: (multinode-009530-m03) DBG | domain multinode-009530-m03 has defined MAC address 52:54:00:a7:99:c6 in network mk-multinode-009530
	I0717 22:17:30.189933   37994 main.go:141] libmachine: (multinode-009530-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:99:c6", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:06:04 +0000 UTC Type:0 Mac:52:54:00:a7:99:c6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-009530-m03 Clientid:01:52:54:00:a7:99:c6}
	I0717 22:17:30.189965   37994 main.go:141] libmachine: (multinode-009530-m03) DBG | domain multinode-009530-m03 has defined IP address 192.168.39.205 and MAC address 52:54:00:a7:99:c6 in network mk-multinode-009530
	I0717 22:17:30.190166   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHPort
	I0717 22:17:30.190354   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHKeyPath
	I0717 22:17:30.190530   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHKeyPath
	I0717 22:17:30.190658   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHUsername
	I0717 22:17:30.190839   37994 main.go:141] libmachine: Using SSH client type: native
	I0717 22:17:30.191246   37994 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0717 22:17:30.191260   37994 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 22:17:30.306447   37994 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689632250.298581204
	
	I0717 22:17:30.306473   37994 fix.go:206] guest clock: 1689632250.298581204
	I0717 22:17:30.306483   37994 fix.go:219] Guest: 2023-07-17 22:17:30.298581204 +0000 UTC Remote: 2023-07-17 22:17:30.186766967 +0000 UTC m=+547.919620118 (delta=111.814237ms)
	I0717 22:17:30.306502   37994 fix.go:190] guest clock delta is within tolerance: 111.814237ms
	I0717 22:17:30.306508   37994 start.go:83] releasing machines lock for "multinode-009530-m03", held for 1m31.475790394s
	I0717 22:17:30.306534   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .DriverName
	I0717 22:17:30.306840   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetIP
	I0717 22:17:30.309579   37994 main.go:141] libmachine: (multinode-009530-m03) DBG | domain multinode-009530-m03 has defined MAC address 52:54:00:a7:99:c6 in network mk-multinode-009530
	I0717 22:17:30.309951   37994 main.go:141] libmachine: (multinode-009530-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:99:c6", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:06:04 +0000 UTC Type:0 Mac:52:54:00:a7:99:c6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-009530-m03 Clientid:01:52:54:00:a7:99:c6}
	I0717 22:17:30.309981   37994 main.go:141] libmachine: (multinode-009530-m03) DBG | domain multinode-009530-m03 has defined IP address 192.168.39.205 and MAC address 52:54:00:a7:99:c6 in network mk-multinode-009530
	I0717 22:17:30.312044   37994 out.go:177] * Found network options:
	I0717 22:17:30.313476   37994 out.go:177]   - NO_PROXY=192.168.39.222,192.168.39.146
	W0717 22:17:30.314833   37994 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 22:17:30.314853   37994 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 22:17:30.314866   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .DriverName
	I0717 22:17:30.315454   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .DriverName
	I0717 22:17:30.315635   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .DriverName
	I0717 22:17:30.315728   37994 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 22:17:30.315765   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHHostname
	W0717 22:17:30.315875   37994 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 22:17:30.315899   37994 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 22:17:30.315971   37994 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 22:17:30.315992   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHHostname
	I0717 22:17:30.318503   37994 main.go:141] libmachine: (multinode-009530-m03) DBG | domain multinode-009530-m03 has defined MAC address 52:54:00:a7:99:c6 in network mk-multinode-009530
	I0717 22:17:30.318870   37994 main.go:141] libmachine: (multinode-009530-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:99:c6", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:06:04 +0000 UTC Type:0 Mac:52:54:00:a7:99:c6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-009530-m03 Clientid:01:52:54:00:a7:99:c6}
	I0717 22:17:30.318897   37994 main.go:141] libmachine: (multinode-009530-m03) DBG | domain multinode-009530-m03 has defined IP address 192.168.39.205 and MAC address 52:54:00:a7:99:c6 in network mk-multinode-009530
	I0717 22:17:30.318915   37994 main.go:141] libmachine: (multinode-009530-m03) DBG | domain multinode-009530-m03 has defined MAC address 52:54:00:a7:99:c6 in network mk-multinode-009530
	I0717 22:17:30.319045   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHPort
	I0717 22:17:30.319203   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHKeyPath
	I0717 22:17:30.319335   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHUsername
	I0717 22:17:30.319407   37994 main.go:141] libmachine: (multinode-009530-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:99:c6", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:06:04 +0000 UTC Type:0 Mac:52:54:00:a7:99:c6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-009530-m03 Clientid:01:52:54:00:a7:99:c6}
	I0717 22:17:30.319453   37994 main.go:141] libmachine: (multinode-009530-m03) DBG | domain multinode-009530-m03 has defined IP address 192.168.39.205 and MAC address 52:54:00:a7:99:c6 in network mk-multinode-009530
	I0717 22:17:30.319459   37994 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530-m03/id_rsa Username:docker}
	I0717 22:17:30.319616   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHPort
	I0717 22:17:30.319805   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHKeyPath
	I0717 22:17:30.319937   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetSSHUsername
	I0717 22:17:30.320057   37994 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530-m03/id_rsa Username:docker}
	I0717 22:17:30.429267   37994 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0717 22:17:30.556322   37994 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 22:17:30.562813   37994 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0717 22:17:30.563004   37994 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 22:17:30.563070   37994 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:17:30.571687   37994 cni.go:265] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 22:17:30.571705   37994 start.go:466] detecting cgroup driver to use...
	I0717 22:17:30.571771   37994 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 22:17:30.587057   37994 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 22:17:30.600223   37994 docker.go:196] disabling cri-docker service (if available) ...
	I0717 22:17:30.600268   37994 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 22:17:30.613678   37994 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 22:17:30.626093   37994 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 22:17:30.772376   37994 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 22:17:30.912613   37994 docker.go:212] disabling docker service ...
	I0717 22:17:30.912674   37994 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 22:17:30.926944   37994 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 22:17:30.939141   37994 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 22:17:31.061905   37994 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 22:17:31.190384   37994 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 22:17:31.204162   37994 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 22:17:31.221364   37994 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0717 22:17:31.221591   37994 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 22:17:31.221659   37994 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:17:31.231300   37994 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 22:17:31.231365   37994 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:17:31.241729   37994 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:17:31.253212   37994 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:17:31.262578   37994 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 22:17:31.272488   37994 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 22:17:31.281842   37994 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0717 22:17:31.281914   37994 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 22:17:31.290583   37994 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 22:17:31.410982   37994 ssh_runner.go:195] Run: sudo systemctl restart crio
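
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before the crio restart. A sketch of the settings they leave behind, with values taken from the commands in the log (section headers omitted):

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
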
	I0717 22:17:31.660937   37994 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 22:17:31.660992   37994 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 22:17:31.666577   37994 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0717 22:17:31.666596   37994 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0717 22:17:31.666602   37994 command_runner.go:130] > Device: 16h/22d	Inode: 1268        Links: 1
	I0717 22:17:31.666609   37994 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 22:17:31.666614   37994 command_runner.go:130] > Access: 2023-07-17 22:17:31.582143899 +0000
	I0717 22:17:31.666619   37994 command_runner.go:130] > Modify: 2023-07-17 22:17:31.582143899 +0000
	I0717 22:17:31.666624   37994 command_runner.go:130] > Change: 2023-07-17 22:17:31.582143899 +0000
	I0717 22:17:31.666628   37994 command_runner.go:130] >  Birth: -
	I0717 22:17:31.666823   37994 start.go:534] Will wait 60s for crictl version
	I0717 22:17:31.666883   37994 ssh_runner.go:195] Run: which crictl
	I0717 22:17:31.671157   37994 command_runner.go:130] > /usr/bin/crictl
	I0717 22:17:31.671222   37994 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 22:17:31.706623   37994 command_runner.go:130] > Version:  0.1.0
	I0717 22:17:31.706649   37994 command_runner.go:130] > RuntimeName:  cri-o
	I0717 22:17:31.706657   37994 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0717 22:17:31.706665   37994 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0717 22:17:31.707968   37994 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 22:17:31.708042   37994 ssh_runner.go:195] Run: crio --version
	I0717 22:17:31.756460   37994 command_runner.go:130] > crio version 1.24.1
	I0717 22:17:31.756479   37994 command_runner.go:130] > Version:          1.24.1
	I0717 22:17:31.756488   37994 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0717 22:17:31.756494   37994 command_runner.go:130] > GitTreeState:     dirty
	I0717 22:17:31.756502   37994 command_runner.go:130] > BuildDate:        2023-07-15T02:24:22Z
	I0717 22:17:31.756509   37994 command_runner.go:130] > GoVersion:        go1.19.9
	I0717 22:17:31.756515   37994 command_runner.go:130] > Compiler:         gc
	I0717 22:17:31.756522   37994 command_runner.go:130] > Platform:         linux/amd64
	I0717 22:17:31.756530   37994 command_runner.go:130] > Linkmode:         dynamic
	I0717 22:17:31.756539   37994 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0717 22:17:31.756547   37994 command_runner.go:130] > SeccompEnabled:   true
	I0717 22:17:31.756553   37994 command_runner.go:130] > AppArmorEnabled:  false
	I0717 22:17:31.758120   37994 ssh_runner.go:195] Run: crio --version
	I0717 22:17:31.807764   37994 command_runner.go:130] > crio version 1.24.1
	I0717 22:17:31.807789   37994 command_runner.go:130] > Version:          1.24.1
	I0717 22:17:31.807806   37994 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0717 22:17:31.807812   37994 command_runner.go:130] > GitTreeState:     dirty
	I0717 22:17:31.807821   37994 command_runner.go:130] > BuildDate:        2023-07-15T02:24:22Z
	I0717 22:17:31.807827   37994 command_runner.go:130] > GoVersion:        go1.19.9
	I0717 22:17:31.807832   37994 command_runner.go:130] > Compiler:         gc
	I0717 22:17:31.807839   37994 command_runner.go:130] > Platform:         linux/amd64
	I0717 22:17:31.807847   37994 command_runner.go:130] > Linkmode:         dynamic
	I0717 22:17:31.807858   37994 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0717 22:17:31.807864   37994 command_runner.go:130] > SeccompEnabled:   true
	I0717 22:17:31.807871   37994 command_runner.go:130] > AppArmorEnabled:  false
	I0717 22:17:31.811498   37994 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 22:17:31.812978   37994 out.go:177]   - env NO_PROXY=192.168.39.222
	I0717 22:17:31.814278   37994 out.go:177]   - env NO_PROXY=192.168.39.222,192.168.39.146
	I0717 22:17:31.815642   37994 main.go:141] libmachine: (multinode-009530-m03) Calling .GetIP
	I0717 22:17:31.818055   37994 main.go:141] libmachine: (multinode-009530-m03) DBG | domain multinode-009530-m03 has defined MAC address 52:54:00:a7:99:c6 in network mk-multinode-009530
	I0717 22:17:31.818354   37994 main.go:141] libmachine: (multinode-009530-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:99:c6", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:06:04 +0000 UTC Type:0 Mac:52:54:00:a7:99:c6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:multinode-009530-m03 Clientid:01:52:54:00:a7:99:c6}
	I0717 22:17:31.818385   37994 main.go:141] libmachine: (multinode-009530-m03) DBG | domain multinode-009530-m03 has defined IP address 192.168.39.205 and MAC address 52:54:00:a7:99:c6 in network mk-multinode-009530
	I0717 22:17:31.818560   37994 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 22:17:31.822634   37994 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0717 22:17:31.823026   37994 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530 for IP: 192.168.39.205
	I0717 22:17:31.823055   37994 certs.go:190] acquiring lock for shared ca certs: {Name:mk358cdd8ffcf2f8ada4337c2bb687d932ef5afe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:17:31.823172   37994 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key
	I0717 22:17:31.823208   37994 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key
	I0717 22:17:31.823220   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 22:17:31.823234   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 22:17:31.823248   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 22:17:31.823260   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 22:17:31.823303   37994 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem (1338 bytes)
	W0717 22:17:31.823329   37994 certs.go:433] ignoring /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990_empty.pem, impossibly tiny 0 bytes
	I0717 22:17:31.823338   37994 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 22:17:31.823360   37994 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem (1078 bytes)
	I0717 22:17:31.823381   37994 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem (1123 bytes)
	I0717 22:17:31.823404   37994 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem (1675 bytes)
	I0717 22:17:31.823442   37994 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:17:31.823465   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:17:31.823477   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem -> /usr/share/ca-certificates/22990.pem
	I0717 22:17:31.823488   37994 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> /usr/share/ca-certificates/229902.pem
	I0717 22:17:31.823880   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 22:17:31.849509   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 22:17:31.874502   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 22:17:31.901934   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 22:17:31.928241   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 22:17:31.951930   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem --> /usr/share/ca-certificates/22990.pem (1338 bytes)
	I0717 22:17:31.975880   37994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /usr/share/ca-certificates/229902.pem (1708 bytes)
	I0717 22:17:32.000279   37994 ssh_runner.go:195] Run: openssl version
	I0717 22:17:32.006069   37994 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0717 22:17:32.006160   37994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 22:17:32.016407   37994 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:17:32.021274   37994 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:17:32.021349   37994 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:17:32.021413   37994 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:17:32.027062   37994 command_runner.go:130] > b5213941
	I0717 22:17:32.027153   37994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 22:17:32.035636   37994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22990.pem && ln -fs /usr/share/ca-certificates/22990.pem /etc/ssl/certs/22990.pem"
	I0717 22:17:32.045356   37994 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22990.pem
	I0717 22:17:32.049966   37994 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 21:49 /usr/share/ca-certificates/22990.pem
	I0717 22:17:32.050029   37994 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 21:49 /usr/share/ca-certificates/22990.pem
	I0717 22:17:32.050074   37994 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22990.pem
	I0717 22:17:32.055461   37994 command_runner.go:130] > 51391683
	I0717 22:17:32.055853   37994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22990.pem /etc/ssl/certs/51391683.0"
	I0717 22:17:32.063932   37994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/229902.pem && ln -fs /usr/share/ca-certificates/229902.pem /etc/ssl/certs/229902.pem"
	I0717 22:17:32.073682   37994 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/229902.pem
	I0717 22:17:32.078114   37994 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 21:49 /usr/share/ca-certificates/229902.pem
	I0717 22:17:32.078365   37994 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 21:49 /usr/share/ca-certificates/229902.pem
	I0717 22:17:32.078415   37994 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/229902.pem
	I0717 22:17:32.083665   37994 command_runner.go:130] > 3ec20f2e
	I0717 22:17:32.084098   37994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/229902.pem /etc/ssl/certs/3ec20f2e.0"
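
The openssl/ln pairs above follow the standard OpenSSL subject-hash lookup scheme: each CA file is exposed under /etc/ssl/certs as <subject-hash>.0 so verification can find it by hash. A minimal sketch reproducing the last link from the values shown in the log:

    # hash the CA, then link it under the name OpenSSL will look up
    openssl x509 -hash -noout -in /usr/share/ca-certificates/229902.pem   # -> 3ec20f2e
    sudo ln -fs /etc/ssl/certs/229902.pem /etc/ssl/certs/3ec20f2e.0
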
	I0717 22:17:32.092260   37994 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 22:17:32.096270   37994 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 22:17:32.096338   37994 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 22:17:32.096429   37994 ssh_runner.go:195] Run: crio config
	I0717 22:17:32.148075   37994 command_runner.go:130] ! time="2023-07-17 22:17:32.140297558Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0717 22:17:32.148107   37994 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0717 22:17:32.155525   37994 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0717 22:17:32.155554   37994 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0717 22:17:32.155562   37994 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0717 22:17:32.155566   37994 command_runner.go:130] > #
	I0717 22:17:32.155573   37994 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0717 22:17:32.155580   37994 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0717 22:17:32.155585   37994 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0717 22:17:32.155597   37994 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0717 22:17:32.155602   37994 command_runner.go:130] > # reload'.
	I0717 22:17:32.155613   37994 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0717 22:17:32.155624   37994 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0717 22:17:32.155635   37994 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0717 22:17:32.155649   37994 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0717 22:17:32.155655   37994 command_runner.go:130] > [crio]
	I0717 22:17:32.155668   37994 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0717 22:17:32.155680   37994 command_runner.go:130] > # containers images, in this directory.
	I0717 22:17:32.155691   37994 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0717 22:17:32.155712   37994 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0717 22:17:32.155725   37994 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0717 22:17:32.155735   37994 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0717 22:17:32.155745   37994 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0717 22:17:32.155757   37994 command_runner.go:130] > storage_driver = "overlay"
	I0717 22:17:32.155769   37994 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0717 22:17:32.155782   37994 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0717 22:17:32.155792   37994 command_runner.go:130] > storage_option = [
	I0717 22:17:32.155803   37994 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0717 22:17:32.155812   37994 command_runner.go:130] > ]
	I0717 22:17:32.155836   37994 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0717 22:17:32.155850   37994 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0717 22:17:32.155861   37994 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0717 22:17:32.155873   37994 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0717 22:17:32.155887   37994 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0717 22:17:32.155897   37994 command_runner.go:130] > # always happen on a node reboot
	I0717 22:17:32.155906   37994 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0717 22:17:32.155914   37994 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0717 22:17:32.155923   37994 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0717 22:17:32.155938   37994 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0717 22:17:32.155950   37994 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0717 22:17:32.155967   37994 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0717 22:17:32.155984   37994 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0717 22:17:32.155993   37994 command_runner.go:130] > # internal_wipe = true
	I0717 22:17:32.156006   37994 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0717 22:17:32.156017   37994 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0717 22:17:32.156026   37994 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0717 22:17:32.156038   37994 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0717 22:17:32.156066   37994 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0717 22:17:32.156075   37994 command_runner.go:130] > [crio.api]
	I0717 22:17:32.156087   37994 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0717 22:17:32.156098   37994 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0717 22:17:32.156113   37994 command_runner.go:130] > # IP address on which the stream server will listen.
	I0717 22:17:32.156121   37994 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0717 22:17:32.156130   37994 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0717 22:17:32.156142   37994 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0717 22:17:32.156152   37994 command_runner.go:130] > # stream_port = "0"
	I0717 22:17:32.156164   37994 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0717 22:17:32.156174   37994 command_runner.go:130] > # stream_enable_tls = false
	I0717 22:17:32.156187   37994 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0717 22:17:32.156197   37994 command_runner.go:130] > # stream_idle_timeout = ""
	I0717 22:17:32.156207   37994 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0717 22:17:32.156220   37994 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0717 22:17:32.156230   37994 command_runner.go:130] > # minutes.
	I0717 22:17:32.156238   37994 command_runner.go:130] > # stream_tls_cert = ""
	I0717 22:17:32.156251   37994 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0717 22:17:32.156265   37994 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0717 22:17:32.156275   37994 command_runner.go:130] > # stream_tls_key = ""
	I0717 22:17:32.156288   37994 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0717 22:17:32.156300   37994 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0717 22:17:32.156309   37994 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0717 22:17:32.156318   37994 command_runner.go:130] > # stream_tls_ca = ""
	I0717 22:17:32.156335   37994 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0717 22:17:32.156346   37994 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0717 22:17:32.156361   37994 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0717 22:17:32.156372   37994 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0717 22:17:32.156396   37994 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0717 22:17:32.156405   37994 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0717 22:17:32.156414   37994 command_runner.go:130] > [crio.runtime]
	I0717 22:17:32.156427   37994 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0717 22:17:32.156440   37994 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0717 22:17:32.156449   37994 command_runner.go:130] > # "nofile=1024:2048"
	I0717 22:17:32.156462   37994 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0717 22:17:32.156472   37994 command_runner.go:130] > # default_ulimits = [
	I0717 22:17:32.156481   37994 command_runner.go:130] > # ]
	I0717 22:17:32.156491   37994 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0717 22:17:32.156498   37994 command_runner.go:130] > # no_pivot = false
	I0717 22:17:32.156508   37994 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0717 22:17:32.156529   37994 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0717 22:17:32.156540   37994 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0717 22:17:32.156553   37994 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0717 22:17:32.156564   37994 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0717 22:17:32.156578   37994 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 22:17:32.156592   37994 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0717 22:17:32.156600   37994 command_runner.go:130] > # Cgroup setting for conmon
	I0717 22:17:32.156615   37994 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0717 22:17:32.156625   37994 command_runner.go:130] > conmon_cgroup = "pod"
	I0717 22:17:32.156639   37994 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0717 22:17:32.156652   37994 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0717 22:17:32.156666   37994 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 22:17:32.156675   37994 command_runner.go:130] > conmon_env = [
	I0717 22:17:32.156688   37994 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0717 22:17:32.156697   37994 command_runner.go:130] > ]
	I0717 22:17:32.156706   37994 command_runner.go:130] > # Additional environment variables to set for all the
	I0717 22:17:32.156717   37994 command_runner.go:130] > # containers. These are overridden if set in the
	I0717 22:17:32.156730   37994 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0717 22:17:32.156740   37994 command_runner.go:130] > # default_env = [
	I0717 22:17:32.156749   37994 command_runner.go:130] > # ]
	I0717 22:17:32.156761   37994 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0717 22:17:32.156771   37994 command_runner.go:130] > # selinux = false
	I0717 22:17:32.156785   37994 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0717 22:17:32.156795   37994 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0717 22:17:32.156807   37994 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0717 22:17:32.156816   37994 command_runner.go:130] > # seccomp_profile = ""
	I0717 22:17:32.156830   37994 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0717 22:17:32.156843   37994 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0717 22:17:32.156856   37994 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0717 22:17:32.156867   37994 command_runner.go:130] > # which might increase security.
	I0717 22:17:32.156877   37994 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0717 22:17:32.156886   37994 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0717 22:17:32.156898   37994 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0717 22:17:32.156911   37994 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0717 22:17:32.156926   37994 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0717 22:17:32.156937   37994 command_runner.go:130] > # This option supports live configuration reload.
	I0717 22:17:32.156948   37994 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0717 22:17:32.156960   37994 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0717 22:17:32.156971   37994 command_runner.go:130] > # the cgroup blockio controller.
	I0717 22:17:32.156978   37994 command_runner.go:130] > # blockio_config_file = ""
	I0717 22:17:32.156986   37994 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0717 22:17:32.156996   37994 command_runner.go:130] > # irqbalance daemon.
	I0717 22:17:32.157008   37994 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0717 22:17:32.157023   37994 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0717 22:17:32.157034   37994 command_runner.go:130] > # This option supports live configuration reload.
	I0717 22:17:32.157044   37994 command_runner.go:130] > # rdt_config_file = ""
	I0717 22:17:32.157055   37994 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0717 22:17:32.157065   37994 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0717 22:17:32.157075   37994 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0717 22:17:32.157082   37994 command_runner.go:130] > # separate_pull_cgroup = ""
	I0717 22:17:32.157098   37994 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0717 22:17:32.157119   37994 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0717 22:17:32.157129   37994 command_runner.go:130] > # will be added.
	I0717 22:17:32.157139   37994 command_runner.go:130] > # default_capabilities = [
	I0717 22:17:32.157149   37994 command_runner.go:130] > # 	"CHOWN",
	I0717 22:17:32.157159   37994 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0717 22:17:32.157168   37994 command_runner.go:130] > # 	"FSETID",
	I0717 22:17:32.157176   37994 command_runner.go:130] > # 	"FOWNER",
	I0717 22:17:32.157180   37994 command_runner.go:130] > # 	"SETGID",
	I0717 22:17:32.157188   37994 command_runner.go:130] > # 	"SETUID",
	I0717 22:17:32.157197   37994 command_runner.go:130] > # 	"SETPCAP",
	I0717 22:17:32.157208   37994 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0717 22:17:32.157214   37994 command_runner.go:130] > # 	"KILL",
	I0717 22:17:32.157224   37994 command_runner.go:130] > # ]
	I0717 22:17:32.157237   37994 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0717 22:17:32.157250   37994 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 22:17:32.157259   37994 command_runner.go:130] > # default_sysctls = [
	I0717 22:17:32.157268   37994 command_runner.go:130] > # ]
	I0717 22:17:32.157277   37994 command_runner.go:130] > # List of devices on the host that a
	I0717 22:17:32.157286   37994 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0717 22:17:32.157296   37994 command_runner.go:130] > # allowed_devices = [
	I0717 22:17:32.157306   37994 command_runner.go:130] > # 	"/dev/fuse",
	I0717 22:17:32.157312   37994 command_runner.go:130] > # ]
	I0717 22:17:32.157324   37994 command_runner.go:130] > # List of additional devices. specified as
	I0717 22:17:32.157339   37994 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0717 22:17:32.157350   37994 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0717 22:17:32.157374   37994 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 22:17:32.157381   37994 command_runner.go:130] > # additional_devices = [
	I0717 22:17:32.157387   37994 command_runner.go:130] > # ]
	I0717 22:17:32.157399   37994 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0717 22:17:32.157409   37994 command_runner.go:130] > # cdi_spec_dirs = [
	I0717 22:17:32.157416   37994 command_runner.go:130] > # 	"/etc/cdi",
	I0717 22:17:32.157426   37994 command_runner.go:130] > # 	"/var/run/cdi",
	I0717 22:17:32.157434   37994 command_runner.go:130] > # ]
	I0717 22:17:32.157447   37994 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0717 22:17:32.157460   37994 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0717 22:17:32.157470   37994 command_runner.go:130] > # Defaults to false.
	I0717 22:17:32.157479   37994 command_runner.go:130] > # device_ownership_from_security_context = false
	I0717 22:17:32.157489   37994 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0717 22:17:32.157503   37994 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0717 22:17:32.157524   37994 command_runner.go:130] > # hooks_dir = [
	I0717 22:17:32.157537   37994 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0717 22:17:32.157546   37994 command_runner.go:130] > # ]
	I0717 22:17:32.157558   37994 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0717 22:17:32.157571   37994 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0717 22:17:32.157584   37994 command_runner.go:130] > # its default mounts from the following two files:
	I0717 22:17:32.157592   37994 command_runner.go:130] > #
	I0717 22:17:32.157600   37994 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0717 22:17:32.157614   37994 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0717 22:17:32.157627   37994 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0717 22:17:32.157636   37994 command_runner.go:130] > #
	I0717 22:17:32.157650   37994 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0717 22:17:32.157663   37994 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0717 22:17:32.157676   37994 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0717 22:17:32.157688   37994 command_runner.go:130] > #      only add mounts it finds in this file.
	I0717 22:17:32.157695   37994 command_runner.go:130] > #
	I0717 22:17:32.157700   37994 command_runner.go:130] > # default_mounts_file = ""
	I0717 22:17:32.157712   37994 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0717 22:17:32.157727   37994 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0717 22:17:32.157738   37994 command_runner.go:130] > pids_limit = 1024
	I0717 22:17:32.157751   37994 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0717 22:17:32.157764   37994 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0717 22:17:32.157778   37994 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0717 22:17:32.157791   37994 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0717 22:17:32.157798   37994 command_runner.go:130] > # log_size_max = -1
	I0717 22:17:32.157810   37994 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I0717 22:17:32.157821   37994 command_runner.go:130] > # log_to_journald = false
	I0717 22:17:32.157832   37994 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0717 22:17:32.157843   37994 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0717 22:17:32.157854   37994 command_runner.go:130] > # Path to directory for container attach sockets.
	I0717 22:17:32.157866   37994 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0717 22:17:32.157879   37994 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0717 22:17:32.157893   37994 command_runner.go:130] > # bind_mount_prefix = ""
	I0717 22:17:32.157902   37994 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0717 22:17:32.157912   37994 command_runner.go:130] > # read_only = false
	I0717 22:17:32.157925   37994 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0717 22:17:32.157940   37994 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0717 22:17:32.157951   37994 command_runner.go:130] > # live configuration reload.
	I0717 22:17:32.157961   37994 command_runner.go:130] > # log_level = "info"
	I0717 22:17:32.157971   37994 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0717 22:17:32.157983   37994 command_runner.go:130] > # This option supports live configuration reload.
	I0717 22:17:32.157991   37994 command_runner.go:130] > # log_filter = ""
	I0717 22:17:32.158001   37994 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0717 22:17:32.158014   37994 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0717 22:17:32.158025   37994 command_runner.go:130] > # separated by comma.
	I0717 22:17:32.158031   37994 command_runner.go:130] > # uid_mappings = ""
	I0717 22:17:32.158045   37994 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0717 22:17:32.158058   37994 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0717 22:17:32.158068   37994 command_runner.go:130] > # separated by comma.
	I0717 22:17:32.158078   37994 command_runner.go:130] > # gid_mappings = ""
	I0717 22:17:32.158090   37994 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0717 22:17:32.158099   37994 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 22:17:32.158113   37994 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 22:17:32.158125   37994 command_runner.go:130] > # minimum_mappable_uid = -1
	I0717 22:17:32.158139   37994 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0717 22:17:32.158153   37994 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 22:17:32.158166   37994 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 22:17:32.158176   37994 command_runner.go:130] > # minimum_mappable_gid = -1
	I0717 22:17:32.158188   37994 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0717 22:17:32.158197   37994 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0717 22:17:32.158209   37994 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0717 22:17:32.158219   37994 command_runner.go:130] > # ctr_stop_timeout = 30
	I0717 22:17:32.158229   37994 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0717 22:17:32.158243   37994 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0717 22:17:32.158254   37994 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0717 22:17:32.158265   37994 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0717 22:17:32.158278   37994 command_runner.go:130] > drop_infra_ctr = false
	I0717 22:17:32.158288   37994 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0717 22:17:32.158299   37994 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0717 22:17:32.158315   37994 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0717 22:17:32.158325   37994 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0717 22:17:32.158339   37994 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0717 22:17:32.158351   37994 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0717 22:17:32.158361   37994 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0717 22:17:32.158373   37994 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0717 22:17:32.158381   37994 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0717 22:17:32.158391   37994 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0717 22:17:32.158405   37994 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0717 22:17:32.158416   37994 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0717 22:17:32.158427   37994 command_runner.go:130] > # default_runtime = "runc"
	I0717 22:17:32.158438   37994 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0717 22:17:32.158453   37994 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0717 22:17:32.158552   37994 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I0717 22:17:32.158576   37994 command_runner.go:130] > # creation as a file is not desired either.
	I0717 22:17:32.158588   37994 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0717 22:17:32.158601   37994 command_runner.go:130] > # the hostname is being managed dynamically.
	I0717 22:17:32.158611   37994 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0717 22:17:32.158620   37994 command_runner.go:130] > # ]
	I0717 22:17:32.158634   37994 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0717 22:17:32.158648   37994 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0717 22:17:32.158663   37994 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0717 22:17:32.158676   37994 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0717 22:17:32.158683   37994 command_runner.go:130] > #
	I0717 22:17:32.158690   37994 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0717 22:17:32.158701   37994 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0717 22:17:32.158712   37994 command_runner.go:130] > #  runtime_type = "oci"
	I0717 22:17:32.158723   37994 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0717 22:17:32.158735   37994 command_runner.go:130] > #  privileged_without_host_devices = false
	I0717 22:17:32.158746   37994 command_runner.go:130] > #  allowed_annotations = []
	I0717 22:17:32.158755   37994 command_runner.go:130] > # Where:
	I0717 22:17:32.158767   37994 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0717 22:17:32.158778   37994 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0717 22:17:32.158790   37994 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0717 22:17:32.158804   37994 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0717 22:17:32.158818   37994 command_runner.go:130] > #   in $PATH.
	I0717 22:17:32.158831   37994 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0717 22:17:32.158842   37994 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0717 22:17:32.158856   37994 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0717 22:17:32.158864   37994 command_runner.go:130] > #   state.
	I0717 22:17:32.158876   37994 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0717 22:17:32.158890   37994 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0717 22:17:32.158907   37994 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0717 22:17:32.158919   37994 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0717 22:17:32.158933   37994 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0717 22:17:32.158947   37994 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0717 22:17:32.158955   37994 command_runner.go:130] > #   The currently recognized values are:
	I0717 22:17:32.158969   37994 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0717 22:17:32.158985   37994 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0717 22:17:32.158999   37994 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0717 22:17:32.159012   37994 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0717 22:17:32.159028   37994 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0717 22:17:32.159041   37994 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0717 22:17:32.159052   37994 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0717 22:17:32.159065   37994 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0717 22:17:32.159077   37994 command_runner.go:130] > #   should be moved to the container's cgroup
	I0717 22:17:32.159094   37994 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0717 22:17:32.159104   37994 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0717 22:17:32.159114   37994 command_runner.go:130] > runtime_type = "oci"
	I0717 22:17:32.159123   37994 command_runner.go:130] > runtime_root = "/run/runc"
	I0717 22:17:32.159133   37994 command_runner.go:130] > runtime_config_path = ""
	I0717 22:17:32.159142   37994 command_runner.go:130] > monitor_path = ""
	I0717 22:17:32.159150   37994 command_runner.go:130] > monitor_cgroup = ""
	I0717 22:17:32.159160   37994 command_runner.go:130] > monitor_exec_cgroup = ""
	I0717 22:17:32.159176   37994 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0717 22:17:32.159186   37994 command_runner.go:130] > # running containers
	I0717 22:17:32.159196   37994 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0717 22:17:32.159210   37994 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0717 22:17:32.159252   37994 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0717 22:17:32.159266   37994 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0717 22:17:32.159278   37994 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0717 22:17:32.159290   37994 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0717 22:17:32.159300   37994 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0717 22:17:32.159311   37994 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0717 22:17:32.159321   37994 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0717 22:17:32.159328   37994 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0717 22:17:32.159336   37994 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0717 22:17:32.159348   37994 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0717 22:17:32.159363   37994 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0717 22:17:32.159379   37994 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0717 22:17:32.159395   37994 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0717 22:17:32.159407   37994 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0717 22:17:32.159424   37994 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0717 22:17:32.159437   37994 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0717 22:17:32.159450   37994 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0717 22:17:32.159466   37994 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0717 22:17:32.159475   37994 command_runner.go:130] > # Example:
	I0717 22:17:32.159486   37994 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0717 22:17:32.159497   37994 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0717 22:17:32.159508   37994 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0717 22:17:32.159519   37994 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0717 22:17:32.159527   37994 command_runner.go:130] > # cpuset = 0
	I0717 22:17:32.159534   37994 command_runner.go:130] > # cpushares = "0-1"
	I0717 22:17:32.159539   37994 command_runner.go:130] > # Where:
	I0717 22:17:32.159551   37994 command_runner.go:130] > # The workload name is workload-type.
	I0717 22:17:32.159566   37994 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0717 22:17:32.159579   37994 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0717 22:17:32.159592   37994 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0717 22:17:32.159607   37994 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0717 22:17:32.159619   37994 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0717 22:17:32.159627   37994 command_runner.go:130] > # 
	I0717 22:17:32.159637   37994 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0717 22:17:32.159645   37994 command_runner.go:130] > #
	I0717 22:17:32.159659   37994 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0717 22:17:32.159673   37994 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0717 22:17:32.159687   37994 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0717 22:17:32.159700   37994 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0717 22:17:32.159714   37994 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0717 22:17:32.159723   37994 command_runner.go:130] > [crio.image]
	I0717 22:17:32.159731   37994 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0717 22:17:32.159742   37994 command_runner.go:130] > # default_transport = "docker://"
	I0717 22:17:32.159757   37994 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0717 22:17:32.159771   37994 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0717 22:17:32.159781   37994 command_runner.go:130] > # global_auth_file = ""
	I0717 22:17:32.159792   37994 command_runner.go:130] > # The image used to instantiate infra containers.
	I0717 22:17:32.159804   37994 command_runner.go:130] > # This option supports live configuration reload.
	I0717 22:17:32.159814   37994 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0717 22:17:32.159825   37994 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0717 22:17:32.159837   37994 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0717 22:17:32.159849   37994 command_runner.go:130] > # This option supports live configuration reload.
	I0717 22:17:32.159859   37994 command_runner.go:130] > # pause_image_auth_file = ""
	I0717 22:17:32.159871   37994 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0717 22:17:32.159883   37994 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0717 22:17:32.159895   37994 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0717 22:17:32.159908   37994 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0717 22:17:32.159917   37994 command_runner.go:130] > # pause_command = "/pause"
	I0717 22:17:32.159929   37994 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0717 22:17:32.159942   37994 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0717 22:17:32.159954   37994 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0717 22:17:32.159967   37994 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0717 22:17:32.159978   37994 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0717 22:17:32.159988   37994 command_runner.go:130] > # signature_policy = ""
	I0717 22:17:32.160000   37994 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0717 22:17:32.160011   37994 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0717 22:17:32.160021   37994 command_runner.go:130] > # changing them here.
	I0717 22:17:32.160030   37994 command_runner.go:130] > # insecure_registries = [
	I0717 22:17:32.160039   37994 command_runner.go:130] > # ]
	I0717 22:17:32.160056   37994 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0717 22:17:32.160068   37994 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0717 22:17:32.160078   37994 command_runner.go:130] > # image_volumes = "mkdir"
	I0717 22:17:32.160097   37994 command_runner.go:130] > # Temporary directory to use for storing big files
	I0717 22:17:32.160108   37994 command_runner.go:130] > # big_files_temporary_dir = ""
	I0717 22:17:32.160119   37994 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I0717 22:17:32.160128   37994 command_runner.go:130] > # CNI plugins.
	I0717 22:17:32.160134   37994 command_runner.go:130] > [crio.network]
	I0717 22:17:32.160140   37994 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0717 22:17:32.160148   37994 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0717 22:17:32.160155   37994 command_runner.go:130] > # cni_default_network = ""
	I0717 22:17:32.160161   37994 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0717 22:17:32.160167   37994 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0717 22:17:32.160173   37994 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0717 22:17:32.160179   37994 command_runner.go:130] > # plugin_dirs = [
	I0717 22:17:32.160183   37994 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0717 22:17:32.160189   37994 command_runner.go:130] > # ]
	I0717 22:17:32.160195   37994 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0717 22:17:32.160201   37994 command_runner.go:130] > [crio.metrics]
	I0717 22:17:32.160206   37994 command_runner.go:130] > # Globally enable or disable metrics support.
	I0717 22:17:32.160212   37994 command_runner.go:130] > enable_metrics = true
	I0717 22:17:32.160217   37994 command_runner.go:130] > # Specify enabled metrics collectors.
	I0717 22:17:32.160224   37994 command_runner.go:130] > # Per default all metrics are enabled.
	I0717 22:17:32.160230   37994 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0717 22:17:32.160238   37994 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0717 22:17:32.160246   37994 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0717 22:17:32.160252   37994 command_runner.go:130] > # metrics_collectors = [
	I0717 22:17:32.160256   37994 command_runner.go:130] > # 	"operations",
	I0717 22:17:32.160263   37994 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0717 22:17:32.160268   37994 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0717 22:17:32.160274   37994 command_runner.go:130] > # 	"operations_errors",
	I0717 22:17:32.160278   37994 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0717 22:17:32.160282   37994 command_runner.go:130] > # 	"image_pulls_by_name",
	I0717 22:17:32.160289   37994 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0717 22:17:32.160293   37994 command_runner.go:130] > # 	"image_pulls_failures",
	I0717 22:17:32.160300   37994 command_runner.go:130] > # 	"image_pulls_successes",
	I0717 22:17:32.160304   37994 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0717 22:17:32.160309   37994 command_runner.go:130] > # 	"image_layer_reuse",
	I0717 22:17:32.160313   37994 command_runner.go:130] > # 	"containers_oom_total",
	I0717 22:17:32.160319   37994 command_runner.go:130] > # 	"containers_oom",
	I0717 22:17:32.160323   37994 command_runner.go:130] > # 	"processes_defunct",
	I0717 22:17:32.160329   37994 command_runner.go:130] > # 	"operations_total",
	I0717 22:17:32.160334   37994 command_runner.go:130] > # 	"operations_latency_seconds",
	I0717 22:17:32.160341   37994 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0717 22:17:32.160345   37994 command_runner.go:130] > # 	"operations_errors_total",
	I0717 22:17:32.160352   37994 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0717 22:17:32.160356   37994 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0717 22:17:32.160363   37994 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0717 22:17:32.160367   37994 command_runner.go:130] > # 	"image_pulls_success_total",
	I0717 22:17:32.160374   37994 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0717 22:17:32.160380   37994 command_runner.go:130] > # 	"containers_oom_count_total",
	I0717 22:17:32.160386   37994 command_runner.go:130] > # ]
	I0717 22:17:32.160391   37994 command_runner.go:130] > # The port on which the metrics server will listen.
	I0717 22:17:32.160397   37994 command_runner.go:130] > # metrics_port = 9090
	I0717 22:17:32.160402   37994 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0717 22:17:32.160408   37994 command_runner.go:130] > # metrics_socket = ""
	I0717 22:17:32.160413   37994 command_runner.go:130] > # The certificate for the secure metrics server.
	I0717 22:17:32.160421   37994 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0717 22:17:32.160429   37994 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0717 22:17:32.160436   37994 command_runner.go:130] > # certificate on any modification event.
	I0717 22:17:32.160440   37994 command_runner.go:130] > # metrics_cert = ""
	I0717 22:17:32.160447   37994 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0717 22:17:32.160452   37994 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0717 22:17:32.160458   37994 command_runner.go:130] > # metrics_key = ""
	I0717 22:17:32.160464   37994 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0717 22:17:32.160469   37994 command_runner.go:130] > [crio.tracing]
	I0717 22:17:32.160475   37994 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0717 22:17:32.160482   37994 command_runner.go:130] > # enable_tracing = false
	I0717 22:17:32.160487   37994 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0717 22:17:32.160494   37994 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0717 22:17:32.160501   37994 command_runner.go:130] > # Number of samples to collect per million spans.
	I0717 22:17:32.160508   37994 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0717 22:17:32.160515   37994 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0717 22:17:32.160521   37994 command_runner.go:130] > [crio.stats]
	I0717 22:17:32.160527   37994 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0717 22:17:32.160534   37994 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0717 22:17:32.160538   37994 command_runner.go:130] > # stats_collection_period = 0
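The dump above is the full `crio config` output that minikube inspects before generating its kubeadm configuration; the fields that matter downstream include cgroup_manager and pause_image. A rough sketch of pulling those keys out of the TOML with github.com/BurntSushi/toml (illustrative only, not minikube's actual parsing code):

package main

import (
	"fmt"
	"os/exec"

	"github.com/BurntSushi/toml"
)

// crioConfig models only the keys referenced elsewhere in this log.
type crioConfig struct {
	Crio struct {
		Runtime struct {
			CgroupManager string `toml:"cgroup_manager"`
			ConmonCgroup  string `toml:"conmon_cgroup"`
		} `toml:"runtime"`
		Image struct {
			PauseImage string `toml:"pause_image"`
		} `toml:"image"`
	} `toml:"crio"`
}

func main() {
	// Output() keeps only stdout; the level=info lines above went to stderr.
	out, err := exec.Command("crio", "config").Output()
	if err != nil {
		panic(err)
	}
	var cfg crioConfig
	if _, err := toml.Decode(string(out), &cfg); err != nil {
		panic(err)
	}
	// Expected from the dump above: cgroupfs pod registry.k8s.io/pause:3.9
	fmt.Println(cfg.Crio.Runtime.CgroupManager, cfg.Crio.Runtime.ConmonCgroup, cfg.Crio.Image.PauseImage)
}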
	I0717 22:17:32.160596   37994 cni.go:84] Creating CNI manager for ""
	I0717 22:17:32.160606   37994 cni.go:137] 3 nodes found, recommending kindnet
	I0717 22:17:32.160614   37994 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 22:17:32.160630   37994 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.205 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-009530 NodeName:multinode-009530-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.205 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 22:17:32.160732   37994 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.205
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-009530-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.205
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.222"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 22:17:32.160776   37994 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-009530-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.205
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:multinode-009530 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 22:17:32.160823   37994 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 22:17:32.169763   37994 command_runner.go:130] > kubeadm
	I0717 22:17:32.169779   37994 command_runner.go:130] > kubectl
	I0717 22:17:32.169785   37994 command_runner.go:130] > kubelet
	I0717 22:17:32.169960   37994 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 22:17:32.170013   37994 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0717 22:17:32.178278   37994 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0717 22:17:32.194602   37994 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 22:17:32.210692   37994 ssh_runner.go:195] Run: grep 192.168.39.222	control-plane.minikube.internal$ /etc/hosts
	I0717 22:17:32.214360   37994 command_runner.go:130] > 192.168.39.222	control-plane.minikube.internal
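The grep above confirms that control-plane.minikube.internal already resolves to the primary node's IP (192.168.39.222) in the guest's /etc/hosts. A small, hypothetical Go equivalent of that idempotent check-and-append (the real flow shells out via ssh_runner):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry appends "ip<TAB>name" to the hosts file unless an
// identical entry already exists; a stand-in for the logged grep.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	want := ip + "\t" + name
	for _, line := range strings.Split(string(data), "\n") {
		if strings.TrimSpace(line) == want {
			return nil // already present, nothing to do
		}
	}
	f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0o644)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = fmt.Fprintln(f, want)
	return err
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.222", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}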
	I0717 22:17:32.214579   37994 host.go:66] Checking if "multinode-009530" exists ...
	I0717 22:17:32.214834   37994 config.go:182] Loaded profile config "multinode-009530": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:17:32.214984   37994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:17:32.215026   37994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:17:32.230645   37994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35801
	I0717 22:17:32.231030   37994 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:17:32.231475   37994 main.go:141] libmachine: Using API Version  1
	I0717 22:17:32.231498   37994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:17:32.231847   37994 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:17:32.232035   37994 main.go:141] libmachine: (multinode-009530) Calling .DriverName
	I0717 22:17:32.232215   37994 start.go:301] JoinCluster: &{Name:multinode-009530 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-009530 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.222 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.146 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.205 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:17:32.232324   37994 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0717 22:17:32.232339   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHHostname
	I0717 22:17:32.235070   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:17:32.235498   37994 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:17:32.235529   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:17:32.235710   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHPort
	I0717 22:17:32.235894   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:17:32.236077   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHUsername
	I0717 22:17:32.236219   37994 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530/id_rsa Username:docker}
	I0717 22:17:32.404666   37994 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 0wmt77.587c4culz9p996xy --discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb 
	I0717 22:17:32.404766   37994 start.go:314] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.205 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime: ControlPlane:false Worker:true}
	I0717 22:17:32.404803   37994 host.go:66] Checking if "multinode-009530" exists ...
	I0717 22:17:32.405093   37994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:17:32.405145   37994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:17:32.419444   37994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41447
	I0717 22:17:32.419854   37994 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:17:32.420264   37994 main.go:141] libmachine: Using API Version  1
	I0717 22:17:32.420283   37994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:17:32.420607   37994 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:17:32.420784   37994 main.go:141] libmachine: (multinode-009530) Calling .DriverName
	I0717 22:17:32.420960   37994 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl drain multinode-009530-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0717 22:17:32.420984   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHHostname
	I0717 22:17:32.423689   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:17:32.424068   37994 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:17:32.424092   37994 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:17:32.424221   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHPort
	I0717 22:17:32.424381   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:17:32.424526   37994 main.go:141] libmachine: (multinode-009530) Calling .GetSSHUsername
	I0717 22:17:32.424728   37994 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530/id_rsa Username:docker}
	I0717 22:17:32.615024   37994 command_runner.go:130] > node/multinode-009530-m03 cordoned
	I0717 22:17:35.651534   37994 command_runner.go:130] > pod "busybox-67b7f59bb-zfwm6" has DeletionTimestamp older than 1 seconds, skipping
	I0717 22:17:35.651556   37994 command_runner.go:130] > node/multinode-009530-m03 drained
	I0717 22:17:35.653319   37994 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0717 22:17:35.653349   37994 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-zldcf, kube-system/kube-proxy-jv9h4
	I0717 22:17:35.653373   37994 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl drain multinode-009530-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.232388106s)
	I0717 22:17:35.653395   37994 node.go:108] successfully drained node "m03"
	I0717 22:17:35.653793   37994 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:17:35.654055   37994 kapi.go:59] client config for multinode-009530: &rest.Config{Host:"https://192.168.39.222:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/client.crt", KeyFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/client.key", CAFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 22:17:35.654309   37994 request.go:1188] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0717 22:17:35.654348   37994 round_trippers.go:463] DELETE https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m03
	I0717 22:17:35.654354   37994 round_trippers.go:469] Request Headers:
	I0717 22:17:35.654362   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:35.654370   37994 round_trippers.go:473]     Content-Type: application/json
	I0717 22:17:35.654377   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:35.667432   37994 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0717 22:17:35.667455   37994 round_trippers.go:577] Response Headers:
	I0717 22:17:35.667463   37994 round_trippers.go:580]     Audit-Id: 41b4cd0e-22e7-4b1c-ba2d-72f4428fd98e
	I0717 22:17:35.667468   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:35.667474   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:35.667479   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:17:35.667484   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:17:35.667489   37994 round_trippers.go:580]     Content-Length: 171
	I0717 22:17:35.667496   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:35 GMT
	I0717 22:17:35.667844   37994 request.go:1188] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-009530-m03","kind":"nodes","uid":"cadf8157-0bcb-4971-8496-da993f9c43bf"}}
	I0717 22:17:35.667895   37994 node.go:124] successfully deleted node "m03"
	I0717 22:17:35.667912   37994 start.go:318] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.205 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime: ControlPlane:false Worker:true}
	I0717 22:17:35.667936   37994 start.go:322] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.205 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime: ControlPlane:false Worker:true}
	I0717 22:17:35.667961   37994 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0wmt77.587c4culz9p996xy --discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-009530-m03"
	I0717 22:17:35.727431   37994 command_runner.go:130] ! W0717 22:17:35.719404    2309 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0717 22:17:35.727535   37994 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0717 22:17:35.862510   37994 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0717 22:17:35.862539   37994 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0717 22:17:36.604599   37994 command_runner.go:130] > [preflight] Running pre-flight checks
	I0717 22:17:36.604628   37994 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0717 22:17:36.604643   37994 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0717 22:17:36.604664   37994 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 22:17:36.604676   37994 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 22:17:36.604684   37994 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0717 22:17:36.604695   37994 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0717 22:17:36.604707   37994 command_runner.go:130] > This node has joined the cluster:
	I0717 22:17:36.604755   37994 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0717 22:17:36.604774   37994 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0717 22:17:36.604785   37994 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0717 22:17:36.604968   37994 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0717 22:17:36.887100   37994 start.go:303] JoinCluster complete in 4.654882354s
	I0717 22:17:36.887123   37994 cni.go:84] Creating CNI manager for ""
	I0717 22:17:36.887128   37994 cni.go:137] 3 nodes found, recommending kindnet
	I0717 22:17:36.887180   37994 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 22:17:36.892833   37994 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0717 22:17:36.892861   37994 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0717 22:17:36.892871   37994 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0717 22:17:36.892877   37994 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 22:17:36.892883   37994 command_runner.go:130] > Access: 2023-07-17 22:13:32.496064079 +0000
	I0717 22:17:36.892889   37994 command_runner.go:130] > Modify: 2023-07-15 02:34:28.000000000 +0000
	I0717 22:17:36.892897   37994 command_runner.go:130] > Change: 2023-07-17 22:13:30.473064079 +0000
	I0717 22:17:36.892902   37994 command_runner.go:130] >  Birth: -
	I0717 22:17:36.892950   37994 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 22:17:36.892962   37994 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 22:17:36.912172   37994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 22:17:37.418820   37994 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0717 22:17:37.426524   37994 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0717 22:17:37.430729   37994 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0717 22:17:37.448594   37994 command_runner.go:130] > daemonset.apps/kindnet configured
	I0717 22:17:37.452056   37994 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:17:37.452412   37994 kapi.go:59] client config for multinode-009530: &rest.Config{Host:"https://192.168.39.222:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/client.crt", KeyFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/client.key", CAFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 22:17:37.452821   37994 round_trippers.go:463] GET https://192.168.39.222:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0717 22:17:37.452836   37994 round_trippers.go:469] Request Headers:
	I0717 22:17:37.452849   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:37.452868   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:37.455527   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:37.455548   37994 round_trippers.go:577] Response Headers:
	I0717 22:17:37.455558   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:37.455567   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:37.455576   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:17:37.455603   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:17:37.455616   37994 round_trippers.go:580]     Content-Length: 291
	I0717 22:17:37.455624   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:37 GMT
	I0717 22:17:37.455633   37994 round_trippers.go:580]     Audit-Id: 7ed2f629-f3c0-4819-afe3-f3a69e0ce629
	I0717 22:17:37.455663   37994 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c60c6831-559f-4b19-8b15-656b8972a35c","resourceVersion":"882","creationTimestamp":"2023-07-17T22:03:52Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0717 22:17:37.455774   37994 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-009530" context rescaled to 1 replicas
	I0717 22:17:37.455809   37994 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.205 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime: ControlPlane:false Worker:true}
	I0717 22:17:37.458068   37994 out.go:177] * Verifying Kubernetes components...
	I0717 22:17:37.459644   37994 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:17:37.473739   37994 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:17:37.473958   37994 kapi.go:59] client config for multinode-009530: &rest.Config{Host:"https://192.168.39.222:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/client.crt", KeyFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/profiles/multinode-009530/client.key", CAFile:"/home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 22:17:37.474251   37994 node_ready.go:35] waiting up to 6m0s for node "multinode-009530-m03" to be "Ready" ...
	I0717 22:17:37.474335   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m03
	I0717 22:17:37.474347   37994 round_trippers.go:469] Request Headers:
	I0717 22:17:37.474358   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:37.474371   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:37.476947   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:37.476963   37994 round_trippers.go:577] Response Headers:
	I0717 22:17:37.476969   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:37 GMT
	I0717 22:17:37.476976   37994 round_trippers.go:580]     Audit-Id: 2b39b199-62f4-48ba-a69a-ac0b28e21592
	I0717 22:17:37.476984   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:37.476993   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:37.477000   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:17:37.477008   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:17:37.477356   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530-m03","uid":"5d0163ce-7a8f-400b-a4d7-e0aa9fdf4d5f","resourceVersion":"1188","creationTimestamp":"2023-07-17T22:17:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0717 22:17:37.477681   37994 node_ready.go:49] node "multinode-009530-m03" has status "Ready":"True"
	I0717 22:17:37.477698   37994 node_ready.go:38] duration metric: took 3.425897ms waiting for node "multinode-009530-m03" to be "Ready" ...
	I0717 22:17:37.477705   37994 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:17:37.477770   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods
	I0717 22:17:37.477780   37994 round_trippers.go:469] Request Headers:
	I0717 22:17:37.477792   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:37.477804   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:37.482094   37994 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 22:17:37.482111   37994 round_trippers.go:577] Response Headers:
	I0717 22:17:37.482122   37994 round_trippers.go:580]     Audit-Id: aa56ad84-6292-4119-b454-67732edc9011
	I0717 22:17:37.482130   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:37.482138   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:37.482147   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:17:37.482154   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:17:37.482164   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:37 GMT
	I0717 22:17:37.483549   37994 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1195"},"items":[{"metadata":{"name":"coredns-5d78c9869d-z4fr8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"1fb1d992-a7b6-4259-ba61-dc4092c65c44","resourceVersion":"866","creationTimestamp":"2023-07-17T22:04:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a6163f77-c2a4-4a3f-a656-fb3401fc7602","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6163f77-c2a4-4a3f-a656-fb3401fc7602\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82090 chars]
	I0717 22:17:37.486712   37994 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-z4fr8" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:37.486782   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-z4fr8
	I0717 22:17:37.486796   37994 round_trippers.go:469] Request Headers:
	I0717 22:17:37.486807   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:37.486820   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:37.489343   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:37.489357   37994 round_trippers.go:577] Response Headers:
	I0717 22:17:37.489364   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:37 GMT
	I0717 22:17:37.489369   37994 round_trippers.go:580]     Audit-Id: 75f94378-f9d6-4c4c-9d6e-e559598934c9
	I0717 22:17:37.489375   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:37.489383   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:37.489391   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:17:37.489400   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:17:37.489643   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-z4fr8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"1fb1d992-a7b6-4259-ba61-dc4092c65c44","resourceVersion":"866","creationTimestamp":"2023-07-17T22:04:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"a6163f77-c2a4-4a3f-a656-fb3401fc7602","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6163f77-c2a4-4a3f-a656-fb3401fc7602\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0717 22:17:37.490076   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:17:37.490090   37994 round_trippers.go:469] Request Headers:
	I0717 22:17:37.490097   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:37.490104   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:37.492626   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:37.492639   37994 round_trippers.go:577] Response Headers:
	I0717 22:17:37.492645   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:37.492652   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:17:37.492661   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:17:37.492669   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:37 GMT
	I0717 22:17:37.492693   37994 round_trippers.go:580]     Audit-Id: 7b5733b8-9a42-4be4-bbda-8ba81cf61a12
	I0717 22:17:37.492699   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:37.492856   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"890","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0717 22:17:37.493266   37994 pod_ready.go:92] pod "coredns-5d78c9869d-z4fr8" in "kube-system" namespace has status "Ready":"True"
	I0717 22:17:37.493284   37994 pod_ready.go:81] duration metric: took 6.550226ms waiting for pod "coredns-5d78c9869d-z4fr8" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:37.493296   37994 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:37.493355   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-009530
	I0717 22:17:37.493365   37994 round_trippers.go:469] Request Headers:
	I0717 22:17:37.493377   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:37.493392   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:37.496481   37994 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:17:37.496504   37994 round_trippers.go:577] Response Headers:
	I0717 22:17:37.496514   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:37 GMT
	I0717 22:17:37.496523   37994 round_trippers.go:580]     Audit-Id: 4f2498ff-9046-4772-a285-a49483451e1e
	I0717 22:17:37.496531   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:37.496539   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:37.496547   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:17:37.496556   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:17:37.496721   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-009530","namespace":"kube-system","uid":"aed75ad9-0156-4275-8a41-b68d09c15660","resourceVersion":"857","creationTimestamp":"2023-07-17T22:03:52Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.222:2379","kubernetes.io/config.hash":"ab77d0bbc5cf528d40fb1d6635b3acda","kubernetes.io/config.mirror":"ab77d0bbc5cf528d40fb1d6635b3acda","kubernetes.io/config.seen":"2023-07-17T22:03:52.473671520Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:03:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0717 22:17:37.497267   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:17:37.497284   37994 round_trippers.go:469] Request Headers:
	I0717 22:17:37.497297   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:37.497314   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:37.502364   37994 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 22:17:37.502387   37994 round_trippers.go:577] Response Headers:
	I0717 22:17:37.502395   37994 round_trippers.go:580]     Audit-Id: 0384bbfa-1a49-4c5c-b50e-bc80fa1eb3b4
	I0717 22:17:37.502401   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:37.502406   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:37.502413   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:17:37.502422   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:17:37.502430   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:37 GMT
	I0717 22:17:37.502780   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"890","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0717 22:17:37.503232   37994 pod_ready.go:92] pod "etcd-multinode-009530" in "kube-system" namespace has status "Ready":"True"
	I0717 22:17:37.503250   37994 pod_ready.go:81] duration metric: took 9.944741ms waiting for pod "etcd-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:37.503271   37994 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:37.503338   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-009530
	I0717 22:17:37.503343   37994 round_trippers.go:469] Request Headers:
	I0717 22:17:37.503354   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:37.503364   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:37.507494   37994 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 22:17:37.507514   37994 round_trippers.go:577] Response Headers:
	I0717 22:17:37.507524   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:17:37.507533   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:17:37.507541   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:37 GMT
	I0717 22:17:37.507551   37994 round_trippers.go:580]     Audit-Id: 2c9f7e9a-617b-4894-9089-52163c3fe44e
	I0717 22:17:37.507563   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:37.507571   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:37.507713   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-009530","namespace":"kube-system","uid":"958b1550-f15f-49f3-acf3-dbab69f82fb8","resourceVersion":"856","creationTimestamp":"2023-07-17T22:03:52Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.222:8443","kubernetes.io/config.hash":"49e7615bd1aa66d6e32161e120c48180","kubernetes.io/config.mirror":"49e7615bd1aa66d6e32161e120c48180","kubernetes.io/config.seen":"2023-07-17T22:03:52.473675304Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:03:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0717 22:17:37.508266   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:17:37.508283   37994 round_trippers.go:469] Request Headers:
	I0717 22:17:37.508290   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:37.508296   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:37.510816   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:37.510838   37994 round_trippers.go:577] Response Headers:
	I0717 22:17:37.510849   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:37 GMT
	I0717 22:17:37.510857   37994 round_trippers.go:580]     Audit-Id: a6d713e1-4de3-4708-9b76-7178e348df9b
	I0717 22:17:37.510866   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:37.510874   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:37.510889   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:17:37.510902   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:17:37.511181   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"890","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0717 22:17:37.511602   37994 pod_ready.go:92] pod "kube-apiserver-multinode-009530" in "kube-system" namespace has status "Ready":"True"
	I0717 22:17:37.511621   37994 pod_ready.go:81] duration metric: took 8.34294ms waiting for pod "kube-apiserver-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:37.511635   37994 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:37.511707   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-009530
	I0717 22:17:37.511717   37994 round_trippers.go:469] Request Headers:
	I0717 22:17:37.511728   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:37.511741   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:37.514542   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:37.514562   37994 round_trippers.go:577] Response Headers:
	I0717 22:17:37.514572   37994 round_trippers.go:580]     Audit-Id: 09739499-3ab1-4c22-84c4-7c66d89cda0a
	I0717 22:17:37.514581   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:37.514589   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:37.514597   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:17:37.514607   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:17:37.514617   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:37 GMT
	I0717 22:17:37.514795   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-009530","namespace":"kube-system","uid":"1c9dba7c-6497-41f0-b751-17988278c710","resourceVersion":"864","creationTimestamp":"2023-07-17T22:03:52Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d8b61663949a18745a23bcf487c538f2","kubernetes.io/config.mirror":"d8b61663949a18745a23bcf487c538f2","kubernetes.io/config.seen":"2023-07-17T22:03:52.473676600Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:03:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0717 22:17:37.515294   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:17:37.515309   37994 round_trippers.go:469] Request Headers:
	I0717 22:17:37.515319   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:37.515330   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:37.518249   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:37.518271   37994 round_trippers.go:577] Response Headers:
	I0717 22:17:37.518281   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:17:37.518290   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:37 GMT
	I0717 22:17:37.518300   37994 round_trippers.go:580]     Audit-Id: 26163200-3947-4ab3-906b-6f54b1ff8056
	I0717 22:17:37.518314   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:37.518326   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:37.518339   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:17:37.518559   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"890","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0717 22:17:37.519038   37994 pod_ready.go:92] pod "kube-controller-manager-multinode-009530" in "kube-system" namespace has status "Ready":"True"
	I0717 22:17:37.519059   37994 pod_ready.go:81] duration metric: took 7.408108ms waiting for pod "kube-controller-manager-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:37.519072   37994 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6rxv8" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:37.674373   37994 request.go:628] Waited for 155.236063ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6rxv8
	I0717 22:17:37.674428   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6rxv8
	I0717 22:17:37.674433   37994 round_trippers.go:469] Request Headers:
	I0717 22:17:37.674442   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:37.674448   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:37.678194   37994 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:17:37.678211   37994 round_trippers.go:577] Response Headers:
	I0717 22:17:37.678219   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:17:37.678225   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:37 GMT
	I0717 22:17:37.678233   37994 round_trippers.go:580]     Audit-Id: e03b807f-c703-4411-b0e6-2f4c1fd35567
	I0717 22:17:37.678241   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:37.678253   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:37.678266   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:17:37.678422   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6rxv8","generateName":"kube-proxy-","namespace":"kube-system","uid":"0d197eb7-b5bd-446a-b2f4-c513c06afcbe","resourceVersion":"1031","creationTimestamp":"2023-07-17T22:04:43Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bcb9f55e-4db6-4370-ad50-de72169bcc0c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcb9f55e-4db6-4370-ad50-de72169bcc0c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0717 22:17:37.875239   37994 request.go:628] Waited for 196.358153ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m02
	I0717 22:17:37.875312   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m02
	I0717 22:17:37.875319   37994 round_trippers.go:469] Request Headers:
	I0717 22:17:37.875330   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:37.875340   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:37.878537   37994 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:17:37.878566   37994 round_trippers.go:577] Response Headers:
	I0717 22:17:37.878576   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:17:37.878584   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:17:37.878595   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:37 GMT
	I0717 22:17:37.878604   37994 round_trippers.go:580]     Audit-Id: 3f4f62ee-3887-48f9-b125-72d337518782
	I0717 22:17:37.878612   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:37.878624   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:37.878730   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530-m02","uid":"3aa87aa6-cbc0-42fe-abf1-386887aa827b","resourceVersion":"1013","creationTimestamp":"2023-07-17T22:15:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:15:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:15:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0717 22:17:37.879064   37994 pod_ready.go:92] pod "kube-proxy-6rxv8" in "kube-system" namespace has status "Ready":"True"
	I0717 22:17:37.879087   37994 pod_ready.go:81] duration metric: took 360.007036ms waiting for pod "kube-proxy-6rxv8" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:37.879100   37994 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jv9h4" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:38.074579   37994 request.go:628] Waited for 195.405343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jv9h4
	I0717 22:17:38.074643   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jv9h4
	I0717 22:17:38.074650   37994 round_trippers.go:469] Request Headers:
	I0717 22:17:38.074663   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:38.074677   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:38.077796   37994 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:17:38.077821   37994 round_trippers.go:577] Response Headers:
	I0717 22:17:38.077833   37994 round_trippers.go:580]     Audit-Id: 86d37850-ba7f-481b-accd-e1706a8e665b
	I0717 22:17:38.077842   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:38.077851   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:38.077860   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:17:38.077868   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:17:38.077880   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:38 GMT
	I0717 22:17:38.078255   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jv9h4","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3b140d5-ec70-4ffe-8372-7fb67d0fb0c9","resourceVersion":"1190","creationTimestamp":"2023-07-17T22:05:32Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bcb9f55e-4db6-4370-ad50-de72169bcc0c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:05:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcb9f55e-4db6-4370-ad50-de72169bcc0c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5887 chars]
	I0717 22:17:38.275240   37994 request.go:628] Waited for 196.429105ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m03
	I0717 22:17:38.275315   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m03
	I0717 22:17:38.275335   37994 round_trippers.go:469] Request Headers:
	I0717 22:17:38.275346   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:38.275360   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:38.278039   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:38.278067   37994 round_trippers.go:577] Response Headers:
	I0717 22:17:38.278078   37994 round_trippers.go:580]     Audit-Id: ae96fc6a-0eee-4a67-adb2-d7f883f9c6e7
	I0717 22:17:38.278087   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:38.278095   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:38.278104   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:17:38.278112   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:17:38.278125   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:38 GMT
	I0717 22:17:38.278412   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530-m03","uid":"5d0163ce-7a8f-400b-a4d7-e0aa9fdf4d5f","resourceVersion":"1188","creationTimestamp":"2023-07-17T22:17:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0717 22:17:38.779588   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jv9h4
	I0717 22:17:38.779633   37994 round_trippers.go:469] Request Headers:
	I0717 22:17:38.779641   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:38.779647   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:38.787023   37994 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 22:17:38.787046   37994 round_trippers.go:577] Response Headers:
	I0717 22:17:38.787056   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:38 GMT
	I0717 22:17:38.787064   37994 round_trippers.go:580]     Audit-Id: 539397dc-8820-4640-b1d8-5b774fd15b22
	I0717 22:17:38.787072   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:38.787080   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:38.787090   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:17:38.787100   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:17:38.787586   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jv9h4","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3b140d5-ec70-4ffe-8372-7fb67d0fb0c9","resourceVersion":"1203","creationTimestamp":"2023-07-17T22:05:32Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bcb9f55e-4db6-4370-ad50-de72169bcc0c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:05:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcb9f55e-4db6-4370-ad50-de72169bcc0c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0717 22:17:38.788040   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530-m03
	I0717 22:17:38.788054   37994 round_trippers.go:469] Request Headers:
	I0717 22:17:38.788062   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:38.788068   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:38.791407   37994 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:17:38.791425   37994 round_trippers.go:577] Response Headers:
	I0717 22:17:38.791432   37994 round_trippers.go:580]     Audit-Id: 65c4ff8f-779f-4b11-ab03-a149bfd9472c
	I0717 22:17:38.791437   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:38.791443   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:38.791448   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:17:38.791455   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:17:38.791461   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:38 GMT
	I0717 22:17:38.791585   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530-m03","uid":"5d0163ce-7a8f-400b-a4d7-e0aa9fdf4d5f","resourceVersion":"1188","creationTimestamp":"2023-07-17T22:17:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:36Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0717 22:17:38.791821   37994 pod_ready.go:92] pod "kube-proxy-jv9h4" in "kube-system" namespace has status "Ready":"True"
	I0717 22:17:38.791833   37994 pod_ready.go:81] duration metric: took 912.726106ms waiting for pod "kube-proxy-jv9h4" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:38.791842   37994 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m5spw" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:38.875231   37994 request.go:628] Waited for 83.329558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m5spw
	I0717 22:17:38.875295   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m5spw
	I0717 22:17:38.875302   37994 round_trippers.go:469] Request Headers:
	I0717 22:17:38.875314   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:38.875329   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:38.879007   37994 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:17:38.879026   37994 round_trippers.go:577] Response Headers:
	I0717 22:17:38.879033   37994 round_trippers.go:580]     Audit-Id: 4f5f6498-2c09-412d-a0ab-fdcdcf722a38
	I0717 22:17:38.879039   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:38.879044   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:38.879050   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:17:38.879055   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:17:38.879063   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:38 GMT
	I0717 22:17:38.879515   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-m5spw","generateName":"kube-proxy-","namespace":"kube-system","uid":"a4bf0eb3-126a-463e-a670-b4793e1c5ec9","resourceVersion":"825","creationTimestamp":"2023-07-17T22:04:05Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bcb9f55e-4db6-4370-ad50-de72169bcc0c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:04:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcb9f55e-4db6-4370-ad50-de72169bcc0c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0717 22:17:39.075410   37994 request.go:628] Waited for 195.410531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:17:39.075478   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:17:39.075483   37994 round_trippers.go:469] Request Headers:
	I0717 22:17:39.075491   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:39.075496   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:39.078331   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:39.078356   37994 round_trippers.go:577] Response Headers:
	I0717 22:17:39.078365   37994 round_trippers.go:580]     Audit-Id: 69c9723a-fd2b-4550-88ed-7c40d0055cc4
	I0717 22:17:39.078373   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:39.078382   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:39.078390   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:17:39.078398   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:17:39.078407   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:39 GMT
	I0717 22:17:39.078577   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"890","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0717 22:17:39.078985   37994 pod_ready.go:92] pod "kube-proxy-m5spw" in "kube-system" namespace has status "Ready":"True"
	I0717 22:17:39.079007   37994 pod_ready.go:81] duration metric: took 287.159094ms waiting for pod "kube-proxy-m5spw" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:39.079018   37994 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:39.274392   37994 request.go:628] Waited for 195.307864ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-009530
	I0717 22:17:39.274486   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-009530
	I0717 22:17:39.274492   37994 round_trippers.go:469] Request Headers:
	I0717 22:17:39.274500   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:39.274508   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:39.278333   37994 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:17:39.278355   37994 round_trippers.go:577] Response Headers:
	I0717 22:17:39.278361   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:39.278368   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:17:39.278373   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:17:39.278378   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:39 GMT
	I0717 22:17:39.278384   37994 round_trippers.go:580]     Audit-Id: a716a197-54aa-4356-96c7-4c9bf133f2f2
	I0717 22:17:39.278389   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:39.278742   37994 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-009530","namespace":"kube-system","uid":"5da85194-923d-40f6-ab44-86209b1f057d","resourceVersion":"859","creationTimestamp":"2023-07-17T22:03:52Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"036d300e0ec7bf28a26e0c644008bbd5","kubernetes.io/config.mirror":"036d300e0ec7bf28a26e0c644008bbd5","kubernetes.io/config.seen":"2023-07-17T22:03:52.473677561Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:03:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0717 22:17:39.474525   37994 request.go:628] Waited for 195.318893ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:17:39.474571   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-009530
	I0717 22:17:39.474575   37994 round_trippers.go:469] Request Headers:
	I0717 22:17:39.474583   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:39.474590   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:39.478323   37994 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:17:39.478347   37994 round_trippers.go:577] Response Headers:
	I0717 22:17:39.478357   37994 round_trippers.go:580]     Audit-Id: 57e9ef48-4bc7-4697-b3ee-f2301d7665ab
	I0717 22:17:39.478364   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:39.478372   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:39.478379   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:17:39.478388   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:17:39.478398   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:39 GMT
	I0717 22:17:39.478608   37994 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"890","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:03:48Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0717 22:17:39.478933   37994 pod_ready.go:92] pod "kube-scheduler-multinode-009530" in "kube-system" namespace has status "Ready":"True"
	I0717 22:17:39.478950   37994 pod_ready.go:81] duration metric: took 399.924901ms waiting for pod "kube-scheduler-multinode-009530" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:39.478961   37994 pod_ready.go:38] duration metric: took 2.001246905s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
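The pod_ready entries above show the control-plane readiness phase: each system pod is fetched and its Ready condition checked until it reports True. A minimal sketch of that kind of wait loop using client-go is shown below; the helper name waitPodReady, the poll interval, and the use of a local kubeconfig are illustrative assumptions, not minikube's actual implementation.

// Hypothetical sketch: poll a pod until its Ready condition is True,
// roughly the check logged by pod_ready.go:92 above. Not minikube's code.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil // pod reports Ready
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // simple fixed poll interval (assumption)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(context.Background(), cs, "kube-system", "kube-proxy-m5spw", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}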
	I0717 22:17:39.478973   37994 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 22:17:39.479015   37994 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:17:39.492302   37994 system_svc.go:56] duration metric: took 13.322363ms WaitForService to wait for kubelet.
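The two lines above correspond to the kubelet service check: minikube runs "sudo systemctl is-active --quiet service kubelet" on the node and treats a zero exit status as "running". A local equivalent of that check, sketched with os/exec (the helper name kubeletActive is illustrative):

// Hedged sketch of the systemctl check logged above; run locally rather than over SSH.
package main

import (
	"fmt"
	"os/exec"
)

func kubeletActive() bool {
	// "is-active --quiet" prints nothing and exits 0 only when the unit is active.
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletActive())
}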
	I0717 22:17:39.492325   37994 kubeadm.go:581] duration metric: took 2.036487739s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 22:17:39.492354   37994 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:17:39.674671   37994 request.go:628] Waited for 182.242542ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes
	I0717 22:17:39.674729   37994 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes
	I0717 22:17:39.674734   37994 round_trippers.go:469] Request Headers:
	I0717 22:17:39.674742   37994 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:39.674748   37994 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:39.677721   37994 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:39.677745   37994 round_trippers.go:577] Response Headers:
	I0717 22:17:39.677755   37994 round_trippers.go:580]     Audit-Id: 4a5acab0-299d-476c-bdf8-93fe2409806b
	I0717 22:17:39.677764   37994 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:39.677774   37994 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:39.677787   37994 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a74f736-eadf-4234-a06c-8ef8c2883694
	I0717 22:17:39.677797   37994 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4ccbf0e9-41c2-45a0-bd90-645b686d5089
	I0717 22:17:39.677814   37994 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:39 GMT
	I0717 22:17:39.678240   37994 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1208"},"items":[{"metadata":{"name":"multinode-009530","uid":"a1bfe947-94f9-41d5-925d-e28e90766065","resourceVersion":"890","creationTimestamp":"2023-07-17T22:03:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-009530","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-009530","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_03_53_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 15135 chars]
	I0717 22:17:39.678917   37994 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:17:39.678937   37994 node_conditions.go:123] node cpu capacity is 2
	I0717 22:17:39.678945   37994 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:17:39.678949   37994 node_conditions.go:123] node cpu capacity is 2
	I0717 22:17:39.678953   37994 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:17:39.678956   37994 node_conditions.go:123] node cpu capacity is 2
	I0717 22:17:39.678959   37994 node_conditions.go:105] duration metric: took 186.601214ms to run NodePressure ...
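The node_conditions phase above lists all nodes once and reads each node's ephemeral-storage and CPU capacity while verifying that no pressure conditions are set. A short client-go sketch of that read is below; it assumes a reachable kubeconfig and is an illustration of the same API call, not the minikube source.

// Illustrative sketch: list nodes, print capacity, flag any pressure conditions.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		for _, c := range n.Status.Conditions {
			// MemoryPressure/DiskPressure/PIDPressure should stay False on a healthy node.
			if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
				fmt.Printf("  pressure condition %s is True: %s\n", c.Type, c.Message)
			}
		}
	}
}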
	I0717 22:17:39.678971   37994 start.go:228] waiting for startup goroutines ...
	I0717 22:17:39.678989   37994 start.go:242] writing updated cluster config ...
	I0717 22:17:39.679246   37994 ssh_runner.go:195] Run: rm -f paused
	I0717 22:17:39.728395   37994 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 22:17:39.731325   37994 out.go:177] * Done! kubectl is now configured to use "multinode-009530" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-07-17 22:13:31 UTC, ends at Mon 2023-07-17 22:17:40 UTC. --
	Jul 17 22:17:40 multinode-009530 crio[715]: time="2023-07-17 22:17:40.673451440Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9c3c3be4-2a0f-49d1-9d4d-3018467e8164 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:17:40 multinode-009530 crio[715]: time="2023-07-17 22:17:40.673735965Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:13e45ae02fd5437993b59f4cbc90e87dcba01034c2ed312e93771f8bf6605dbf,PodSandboxId:e2eba808a35fd5557c354d2cf222f610f081891115415003bdc7bccda5083839,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689632077985655453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f48e9c-2b37-4edc-89e4-d032cac0d573,},Annotations:map[string]string{io.kubernetes.container.hash: 4c201429,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd7dfa53d3f6b9b72e9ba04335742a4c9024d4d2bf2dd0be854464989f7a0f10,PodSandboxId:cc1b57f90ae120c6294ddef446286d7f72c9de4feab14d3a10be1a3867f5ade4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1689632055394993503,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-p72ln,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aecc37f7-73f7-490b-9b82-bf330600bf41,},Annotations:map[string]string{io.kubernetes.container.hash: a749282e,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97f01e9aa81bc38ecf8c80440fc802c5091c7b471b05f2a3539e295a98878bda,PodSandboxId:4bfa7659fa061be2415d8c0f3b1c4c85c3bc223ac70eda4ff505b1b6b0934824,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689632054268375628,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-z4fr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fb1d992-a7b6-4259-ba61-dc4092c65c44,},Annotations:map[string]string{io.kubernetes.container.hash: e80e6552,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22e023e88b0df5925b1df58e9297423a5c922b86d30c7f9b94d6bdfee5a3139e,PodSandboxId:7cc6d72318a44cf6c24eab948b18e448e36e99064fa0938ae58fdd4ee7c25e04,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1689632049211376256,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gh4hn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: d474f5c5-bd94-411b-8d69-b3871c2b5653,},Annotations:map[string]string{io.kubernetes.container.hash: 15eb87d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da819367706785fe6e31ab060f48f4236429ff2f75dd24db4d3e8e8c51530e4,PodSandboxId:d3e580f9c4f876b2fa9569c5dfe0f0b88e7fd148ddcc0789eebd2903bd1a6a24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689632047374786885,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5spw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4bf0eb3-126a-463e-a670-b4793e1
c5ec9,},Annotations:map[string]string{io.kubernetes.container.hash: 2b8f1632,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d24ab8eef18227acbdcd9dc750b9554a4aad8ebf8b701d8eaec30709439626,PodSandboxId:e2eba808a35fd5557c354d2cf222f610f081891115415003bdc7bccda5083839,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689632046882504546,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f48e9c-2b37-4edc-89e4-d032cac0d
573,},Annotations:map[string]string{io.kubernetes.container.hash: 4c201429,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f472842f94838e4ac7494a9d8ce79419dadeecfc13643528dcdfaf4069ad403f,PodSandboxId:e5f5696eba245088453d497f93fa0175a8a3c7fb5542883785e500e9b75f795f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689632040365411958,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 036d300e0ec7bf28a26e0c644008bbd5,},Annota
tions:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f07edfb41016bb157300a6c682757797cd01208d025a1e018c3a94523c82f60,PodSandboxId:916cf313eec7d1ba3bcbc5f89c1d72e348f8240a6740964399311d648c3cd473,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689632040246289267,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab77d0bbc5cf528d40fb1d6635b3acda,},Annotations:map[string]string{io.kubernetes.container.hash
: 6ce2a6aa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64ec027e7ae509e329d38ea01c30ba1d150bab6a6e9b74428a1ff4d10e66e835,PodSandboxId:4c276e41bd1bb3c6b5b48fff8454f0d8efcec9bcab2750a8e323b8e124378aa9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689632039964849675,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8b61663949a18745a23bcf487c538f2,},Annotations:map[string]string{io.k
ubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7cf8c3ddacdfb41993478ac6d006a40f225edf27e892ef3abb11ca187031c21,PodSandboxId:02b64e405ec2f1e30b4c5076af600bc0befe5666c6ec033c2b33c3cfcb3e7a05,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689632039746546344,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49e7615bd1aa66d6e32161e120c48180,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 2f313b3a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9c3c3be4-2a0f-49d1-9d4d-3018467e8164 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:17:40 multinode-009530 crio[715]: time="2023-07-17 22:17:40.716773007Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1a73f3ab-304c-4761-82f5-a0282b37c3fa name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:17:40 multinode-009530 crio[715]: time="2023-07-17 22:17:40.716844261Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1a73f3ab-304c-4761-82f5-a0282b37c3fa name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:17:40 multinode-009530 crio[715]: time="2023-07-17 22:17:40.717051174Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:13e45ae02fd5437993b59f4cbc90e87dcba01034c2ed312e93771f8bf6605dbf,PodSandboxId:e2eba808a35fd5557c354d2cf222f610f081891115415003bdc7bccda5083839,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689632077985655453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f48e9c-2b37-4edc-89e4-d032cac0d573,},Annotations:map[string]string{io.kubernetes.container.hash: 4c201429,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd7dfa53d3f6b9b72e9ba04335742a4c9024d4d2bf2dd0be854464989f7a0f10,PodSandboxId:cc1b57f90ae120c6294ddef446286d7f72c9de4feab14d3a10be1a3867f5ade4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1689632055394993503,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-p72ln,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aecc37f7-73f7-490b-9b82-bf330600bf41,},Annotations:map[string]string{io.kubernetes.container.hash: a749282e,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97f01e9aa81bc38ecf8c80440fc802c5091c7b471b05f2a3539e295a98878bda,PodSandboxId:4bfa7659fa061be2415d8c0f3b1c4c85c3bc223ac70eda4ff505b1b6b0934824,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689632054268375628,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-z4fr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fb1d992-a7b6-4259-ba61-dc4092c65c44,},Annotations:map[string]string{io.kubernetes.container.hash: e80e6552,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22e023e88b0df5925b1df58e9297423a5c922b86d30c7f9b94d6bdfee5a3139e,PodSandboxId:7cc6d72318a44cf6c24eab948b18e448e36e99064fa0938ae58fdd4ee7c25e04,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1689632049211376256,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gh4hn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: d474f5c5-bd94-411b-8d69-b3871c2b5653,},Annotations:map[string]string{io.kubernetes.container.hash: 15eb87d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da819367706785fe6e31ab060f48f4236429ff2f75dd24db4d3e8e8c51530e4,PodSandboxId:d3e580f9c4f876b2fa9569c5dfe0f0b88e7fd148ddcc0789eebd2903bd1a6a24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689632047374786885,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5spw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4bf0eb3-126a-463e-a670-b4793e1
c5ec9,},Annotations:map[string]string{io.kubernetes.container.hash: 2b8f1632,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d24ab8eef18227acbdcd9dc750b9554a4aad8ebf8b701d8eaec30709439626,PodSandboxId:e2eba808a35fd5557c354d2cf222f610f081891115415003bdc7bccda5083839,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689632046882504546,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f48e9c-2b37-4edc-89e4-d032cac0d
573,},Annotations:map[string]string{io.kubernetes.container.hash: 4c201429,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f472842f94838e4ac7494a9d8ce79419dadeecfc13643528dcdfaf4069ad403f,PodSandboxId:e5f5696eba245088453d497f93fa0175a8a3c7fb5542883785e500e9b75f795f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689632040365411958,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 036d300e0ec7bf28a26e0c644008bbd5,},Annota
tions:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f07edfb41016bb157300a6c682757797cd01208d025a1e018c3a94523c82f60,PodSandboxId:916cf313eec7d1ba3bcbc5f89c1d72e348f8240a6740964399311d648c3cd473,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689632040246289267,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab77d0bbc5cf528d40fb1d6635b3acda,},Annotations:map[string]string{io.kubernetes.container.hash
: 6ce2a6aa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64ec027e7ae509e329d38ea01c30ba1d150bab6a6e9b74428a1ff4d10e66e835,PodSandboxId:4c276e41bd1bb3c6b5b48fff8454f0d8efcec9bcab2750a8e323b8e124378aa9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689632039964849675,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8b61663949a18745a23bcf487c538f2,},Annotations:map[string]string{io.k
ubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7cf8c3ddacdfb41993478ac6d006a40f225edf27e892ef3abb11ca187031c21,PodSandboxId:02b64e405ec2f1e30b4c5076af600bc0befe5666c6ec033c2b33c3cfcb3e7a05,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689632039746546344,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49e7615bd1aa66d6e32161e120c48180,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 2f313b3a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1a73f3ab-304c-4761-82f5-a0282b37c3fa name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:17:40 multinode-009530 crio[715]: time="2023-07-17 22:17:40.759056040Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9979c04d-d56a-4621-a8e0-1ae3e41fb525 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:17:40 multinode-009530 crio[715]: time="2023-07-17 22:17:40.759207452Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9979c04d-d56a-4621-a8e0-1ae3e41fb525 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:17:40 multinode-009530 crio[715]: time="2023-07-17 22:17:40.759412521Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:13e45ae02fd5437993b59f4cbc90e87dcba01034c2ed312e93771f8bf6605dbf,PodSandboxId:e2eba808a35fd5557c354d2cf222f610f081891115415003bdc7bccda5083839,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689632077985655453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f48e9c-2b37-4edc-89e4-d032cac0d573,},Annotations:map[string]string{io.kubernetes.container.hash: 4c201429,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd7dfa53d3f6b9b72e9ba04335742a4c9024d4d2bf2dd0be854464989f7a0f10,PodSandboxId:cc1b57f90ae120c6294ddef446286d7f72c9de4feab14d3a10be1a3867f5ade4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1689632055394993503,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-p72ln,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aecc37f7-73f7-490b-9b82-bf330600bf41,},Annotations:map[string]string{io.kubernetes.container.hash: a749282e,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97f01e9aa81bc38ecf8c80440fc802c5091c7b471b05f2a3539e295a98878bda,PodSandboxId:4bfa7659fa061be2415d8c0f3b1c4c85c3bc223ac70eda4ff505b1b6b0934824,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689632054268375628,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-z4fr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fb1d992-a7b6-4259-ba61-dc4092c65c44,},Annotations:map[string]string{io.kubernetes.container.hash: e80e6552,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22e023e88b0df5925b1df58e9297423a5c922b86d30c7f9b94d6bdfee5a3139e,PodSandboxId:7cc6d72318a44cf6c24eab948b18e448e36e99064fa0938ae58fdd4ee7c25e04,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1689632049211376256,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gh4hn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: d474f5c5-bd94-411b-8d69-b3871c2b5653,},Annotations:map[string]string{io.kubernetes.container.hash: 15eb87d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da819367706785fe6e31ab060f48f4236429ff2f75dd24db4d3e8e8c51530e4,PodSandboxId:d3e580f9c4f876b2fa9569c5dfe0f0b88e7fd148ddcc0789eebd2903bd1a6a24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689632047374786885,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5spw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4bf0eb3-126a-463e-a670-b4793e1
c5ec9,},Annotations:map[string]string{io.kubernetes.container.hash: 2b8f1632,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d24ab8eef18227acbdcd9dc750b9554a4aad8ebf8b701d8eaec30709439626,PodSandboxId:e2eba808a35fd5557c354d2cf222f610f081891115415003bdc7bccda5083839,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689632046882504546,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f48e9c-2b37-4edc-89e4-d032cac0d
573,},Annotations:map[string]string{io.kubernetes.container.hash: 4c201429,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f472842f94838e4ac7494a9d8ce79419dadeecfc13643528dcdfaf4069ad403f,PodSandboxId:e5f5696eba245088453d497f93fa0175a8a3c7fb5542883785e500e9b75f795f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689632040365411958,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 036d300e0ec7bf28a26e0c644008bbd5,},Annota
tions:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f07edfb41016bb157300a6c682757797cd01208d025a1e018c3a94523c82f60,PodSandboxId:916cf313eec7d1ba3bcbc5f89c1d72e348f8240a6740964399311d648c3cd473,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689632040246289267,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab77d0bbc5cf528d40fb1d6635b3acda,},Annotations:map[string]string{io.kubernetes.container.hash
: 6ce2a6aa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64ec027e7ae509e329d38ea01c30ba1d150bab6a6e9b74428a1ff4d10e66e835,PodSandboxId:4c276e41bd1bb3c6b5b48fff8454f0d8efcec9bcab2750a8e323b8e124378aa9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689632039964849675,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8b61663949a18745a23bcf487c538f2,},Annotations:map[string]string{io.k
ubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7cf8c3ddacdfb41993478ac6d006a40f225edf27e892ef3abb11ca187031c21,PodSandboxId:02b64e405ec2f1e30b4c5076af600bc0befe5666c6ec033c2b33c3cfcb3e7a05,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689632039746546344,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49e7615bd1aa66d6e32161e120c48180,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 2f313b3a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9979c04d-d56a-4621-a8e0-1ae3e41fb525 name=/runtime.v1alpha2.RuntimeService/ListContainers
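The CRI-O journal entries above are the server side of RuntimeService/ListContainers calls made over the CRI socket. A hedged sketch of issuing the same call from a Go client is shown here; it uses the v1 CRI API (the journal shows both v1alpha2 and v1), and the socket path and access assumptions (root on the node, /var/run/crio/crio.sock) are stated rather than taken from the report. In practice "sudo crictl ps -a" performs the equivalent query from the command line.

// Sketch only: connect to the CRI-O socket and list containers via the CRI v1 API.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Empty filter: CRI-O then logs "No filters were applied, returning full container list".
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %s  %s\n", c.Id[:12], c.Metadata.Name, c.State)
	}
}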
	Jul 17 22:17:40 multinode-009530 crio[715]: time="2023-07-17 22:17:40.775219851Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=97fa5a83-4039-4d07-8256-c644ded53c23 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 22:17:40 multinode-009530 crio[715]: time="2023-07-17 22:17:40.775482032Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:4bfa7659fa061be2415d8c0f3b1c4c85c3bc223ac70eda4ff505b1b6b0934824,Metadata:&PodSandboxMetadata{Name:coredns-5d78c9869d-z4fr8,Uid:1fb1d992-a7b6-4259-ba61-dc4092c65c44,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689632053618284805,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5d78c9869d-z4fr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fb1d992-a7b6-4259-ba61-dc4092c65c44,k8s-app: kube-dns,pod-template-hash: 5d78c9869d,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T22:14:05.743476083Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cc1b57f90ae120c6294ddef446286d7f72c9de4feab14d3a10be1a3867f5ade4,Metadata:&PodSandboxMetadata{Name:busybox-67b7f59bb-p72ln,Uid:aecc37f7-73f7-490b-9b82-bf330600bf41,Namespace:default,A
ttempt:0,},State:SANDBOX_READY,CreatedAt:1689632053614318420,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-67b7f59bb-p72ln,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aecc37f7-73f7-490b-9b82-bf330600bf41,pod-template-hash: 67b7f59bb,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T22:14:05.743465362Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d3e580f9c4f876b2fa9569c5dfe0f0b88e7fd148ddcc0789eebd2903bd1a6a24,Metadata:&PodSandboxMetadata{Name:kube-proxy-m5spw,Uid:a4bf0eb3-126a-463e-a670-b4793e1c5ec9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689632046119191994,Labels:map[string]string{controller-revision-hash: 56999f657b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-m5spw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4bf0eb3-126a-463e-a670-b4793e1c5ec9,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{k
ubernetes.io/config.seen: 2023-07-17T22:14:05.743479477Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7cc6d72318a44cf6c24eab948b18e448e36e99064fa0938ae58fdd4ee7c25e04,Metadata:&PodSandboxMetadata{Name:kindnet-gh4hn,Uid:d474f5c5-bd94-411b-8d69-b3871c2b5653,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689632046114930558,Labels:map[string]string{app: kindnet,controller-revision-hash: 575d9d6996,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-gh4hn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d474f5c5-bd94-411b-8d69-b3871c2b5653,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T22:14:05.743477323Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e2eba808a35fd5557c354d2cf222f610f081891115415003bdc7bccda5083839,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:d8f48e9c-2b37-4edc-89e4-d032cac0d573,Namespace:kube-system,Attempt:0,},State:
SANDBOX_READY,CreatedAt:1689632046077463897,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f48e9c-2b37-4edc-89e4-d032cac0d573,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\
",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-07-17T22:14:05.743480531Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e5f5696eba245088453d497f93fa0175a8a3c7fb5542883785e500e9b75f795f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-009530,Uid:036d300e0ec7bf28a26e0c644008bbd5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689632039296149426,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 036d300e0ec7bf28a26e0c644008bbd5,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 036d300e0ec7bf28a26e0c644008bbd5,kubernetes.io/config.seen: 2023-07-17T22:13:58.739844781Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:02b64e405ec2f1e30b4c5076af600bc0befe5666c6ec033c2b33c3cfcb3e7a05,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multin
ode-009530,Uid:49e7615bd1aa66d6e32161e120c48180,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689632039275867021,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49e7615bd1aa66d6e32161e120c48180,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.222:8443,kubernetes.io/config.hash: 49e7615bd1aa66d6e32161e120c48180,kubernetes.io/config.seen: 2023-07-17T22:13:58.739842886Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4c276e41bd1bb3c6b5b48fff8454f0d8efcec9bcab2750a8e323b8e124378aa9,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-009530,Uid:d8b61663949a18745a23bcf487c538f2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689632039262283554,Labels:map[string]string{component: kube-controller-manager,io.kubern
etes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8b61663949a18745a23bcf487c538f2,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d8b61663949a18745a23bcf487c538f2,kubernetes.io/config.seen: 2023-07-17T22:13:58.739844068Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:916cf313eec7d1ba3bcbc5f89c1d72e348f8240a6740964399311d648c3cd473,Metadata:&PodSandboxMetadata{Name:etcd-multinode-009530,Uid:ab77d0bbc5cf528d40fb1d6635b3acda,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689632039228956088,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab77d0bbc5cf528d40fb1d6635b3acda,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.222:2379,kubernet
es.io/config.hash: ab77d0bbc5cf528d40fb1d6635b3acda,kubernetes.io/config.seen: 2023-07-17T22:13:58.739838261Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=97fa5a83-4039-4d07-8256-c644ded53c23 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 22:17:40 multinode-009530 crio[715]: time="2023-07-17 22:17:40.776331124Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ce440c80-9fb0-477b-a0fa-51ea4b8ffafb name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 22:17:40 multinode-009530 crio[715]: time="2023-07-17 22:17:40.776383378Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ce440c80-9fb0-477b-a0fa-51ea4b8ffafb name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 22:17:40 multinode-009530 crio[715]: time="2023-07-17 22:17:40.776556080Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:13e45ae02fd5437993b59f4cbc90e87dcba01034c2ed312e93771f8bf6605dbf,PodSandboxId:e2eba808a35fd5557c354d2cf222f610f081891115415003bdc7bccda5083839,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689632077985655453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f48e9c-2b37-4edc-89e4-d032cac0d573,},Annotations:map[string]string{io.kubernetes.container.hash: 4c201429,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd7dfa53d3f6b9b72e9ba04335742a4c9024d4d2bf2dd0be854464989f7a0f10,PodSandboxId:cc1b57f90ae120c6294ddef446286d7f72c9de4feab14d3a10be1a3867f5ade4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1689632055394993503,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-p72ln,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aecc37f7-73f7-490b-9b82-bf330600bf41,},Annotations:map[string]string{io.kubernetes.container.hash: a749282e,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97f01e9aa81bc38ecf8c80440fc802c5091c7b471b05f2a3539e295a98878bda,PodSandboxId:4bfa7659fa061be2415d8c0f3b1c4c85c3bc223ac70eda4ff505b1b6b0934824,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689632054268375628,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-z4fr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fb1d992-a7b6-4259-ba61-dc4092c65c44,},Annotations:map[string]string{io.kubernetes.container.hash: e80e6552,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22e023e88b0df5925b1df58e9297423a5c922b86d30c7f9b94d6bdfee5a3139e,PodSandboxId:7cc6d72318a44cf6c24eab948b18e448e36e99064fa0938ae58fdd4ee7c25e04,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1689632049211376256,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gh4hn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: d474f5c5-bd94-411b-8d69-b3871c2b5653,},Annotations:map[string]string{io.kubernetes.container.hash: 15eb87d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da819367706785fe6e31ab060f48f4236429ff2f75dd24db4d3e8e8c51530e4,PodSandboxId:d3e580f9c4f876b2fa9569c5dfe0f0b88e7fd148ddcc0789eebd2903bd1a6a24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689632047374786885,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5spw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4bf0eb3-126a-463e-a670-b4793e1
c5ec9,},Annotations:map[string]string{io.kubernetes.container.hash: 2b8f1632,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f472842f94838e4ac7494a9d8ce79419dadeecfc13643528dcdfaf4069ad403f,PodSandboxId:e5f5696eba245088453d497f93fa0175a8a3c7fb5542883785e500e9b75f795f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689632040365411958,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 036d300e0ec7bf28a26e0c644008bbd5,},Anno
tations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f07edfb41016bb157300a6c682757797cd01208d025a1e018c3a94523c82f60,PodSandboxId:916cf313eec7d1ba3bcbc5f89c1d72e348f8240a6740964399311d648c3cd473,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689632040246289267,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab77d0bbc5cf528d40fb1d6635b3acda,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 6ce2a6aa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64ec027e7ae509e329d38ea01c30ba1d150bab6a6e9b74428a1ff4d10e66e835,PodSandboxId:4c276e41bd1bb3c6b5b48fff8454f0d8efcec9bcab2750a8e323b8e124378aa9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689632039964849675,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8b61663949a18745a23bcf487c538f2,},Annotations:map[string]string{io
.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7cf8c3ddacdfb41993478ac6d006a40f225edf27e892ef3abb11ca187031c21,PodSandboxId:02b64e405ec2f1e30b4c5076af600bc0befe5666c6ec033c2b33c3cfcb3e7a05,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689632039746546344,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49e7615bd1aa66d6e32161e120c48180,},Annotations:map[string]string{io.kubernetes.
container.hash: 2f313b3a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ce440c80-9fb0-477b-a0fa-51ea4b8ffafb name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 22:17:40 multinode-009530 crio[715]: time="2023-07-17 22:17:40.795022941Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9f17e0a3-1d18-44ba-9543-3d46d58caae9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:17:40 multinode-009530 crio[715]: time="2023-07-17 22:17:40.795162139Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9f17e0a3-1d18-44ba-9543-3d46d58caae9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:17:40 multinode-009530 crio[715]: time="2023-07-17 22:17:40.795373211Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:13e45ae02fd5437993b59f4cbc90e87dcba01034c2ed312e93771f8bf6605dbf,PodSandboxId:e2eba808a35fd5557c354d2cf222f610f081891115415003bdc7bccda5083839,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689632077985655453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f48e9c-2b37-4edc-89e4-d032cac0d573,},Annotations:map[string]string{io.kubernetes.container.hash: 4c201429,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd7dfa53d3f6b9b72e9ba04335742a4c9024d4d2bf2dd0be854464989f7a0f10,PodSandboxId:cc1b57f90ae120c6294ddef446286d7f72c9de4feab14d3a10be1a3867f5ade4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1689632055394993503,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-p72ln,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aecc37f7-73f7-490b-9b82-bf330600bf41,},Annotations:map[string]string{io.kubernetes.container.hash: a749282e,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97f01e9aa81bc38ecf8c80440fc802c5091c7b471b05f2a3539e295a98878bda,PodSandboxId:4bfa7659fa061be2415d8c0f3b1c4c85c3bc223ac70eda4ff505b1b6b0934824,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689632054268375628,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-z4fr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fb1d992-a7b6-4259-ba61-dc4092c65c44,},Annotations:map[string]string{io.kubernetes.container.hash: e80e6552,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22e023e88b0df5925b1df58e9297423a5c922b86d30c7f9b94d6bdfee5a3139e,PodSandboxId:7cc6d72318a44cf6c24eab948b18e448e36e99064fa0938ae58fdd4ee7c25e04,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1689632049211376256,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gh4hn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: d474f5c5-bd94-411b-8d69-b3871c2b5653,},Annotations:map[string]string{io.kubernetes.container.hash: 15eb87d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da819367706785fe6e31ab060f48f4236429ff2f75dd24db4d3e8e8c51530e4,PodSandboxId:d3e580f9c4f876b2fa9569c5dfe0f0b88e7fd148ddcc0789eebd2903bd1a6a24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689632047374786885,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5spw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4bf0eb3-126a-463e-a670-b4793e1
c5ec9,},Annotations:map[string]string{io.kubernetes.container.hash: 2b8f1632,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d24ab8eef18227acbdcd9dc750b9554a4aad8ebf8b701d8eaec30709439626,PodSandboxId:e2eba808a35fd5557c354d2cf222f610f081891115415003bdc7bccda5083839,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689632046882504546,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f48e9c-2b37-4edc-89e4-d032cac0d
573,},Annotations:map[string]string{io.kubernetes.container.hash: 4c201429,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f472842f94838e4ac7494a9d8ce79419dadeecfc13643528dcdfaf4069ad403f,PodSandboxId:e5f5696eba245088453d497f93fa0175a8a3c7fb5542883785e500e9b75f795f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689632040365411958,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 036d300e0ec7bf28a26e0c644008bbd5,},Annota
tions:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f07edfb41016bb157300a6c682757797cd01208d025a1e018c3a94523c82f60,PodSandboxId:916cf313eec7d1ba3bcbc5f89c1d72e348f8240a6740964399311d648c3cd473,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689632040246289267,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab77d0bbc5cf528d40fb1d6635b3acda,},Annotations:map[string]string{io.kubernetes.container.hash
: 6ce2a6aa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64ec027e7ae509e329d38ea01c30ba1d150bab6a6e9b74428a1ff4d10e66e835,PodSandboxId:4c276e41bd1bb3c6b5b48fff8454f0d8efcec9bcab2750a8e323b8e124378aa9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689632039964849675,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8b61663949a18745a23bcf487c538f2,},Annotations:map[string]string{io.k
ubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7cf8c3ddacdfb41993478ac6d006a40f225edf27e892ef3abb11ca187031c21,PodSandboxId:02b64e405ec2f1e30b4c5076af600bc0befe5666c6ec033c2b33c3cfcb3e7a05,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689632039746546344,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49e7615bd1aa66d6e32161e120c48180,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 2f313b3a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9f17e0a3-1d18-44ba-9543-3d46d58caae9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:17:40 multinode-009530 crio[715]: time="2023-07-17 22:17:40.828618358Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f3d0a6e1-f9d1-4ef3-a811-b26db2fb857c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:17:40 multinode-009530 crio[715]: time="2023-07-17 22:17:40.828684899Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f3d0a6e1-f9d1-4ef3-a811-b26db2fb857c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:17:40 multinode-009530 crio[715]: time="2023-07-17 22:17:40.828912021Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:13e45ae02fd5437993b59f4cbc90e87dcba01034c2ed312e93771f8bf6605dbf,PodSandboxId:e2eba808a35fd5557c354d2cf222f610f081891115415003bdc7bccda5083839,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689632077985655453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f48e9c-2b37-4edc-89e4-d032cac0d573,},Annotations:map[string]string{io.kubernetes.container.hash: 4c201429,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd7dfa53d3f6b9b72e9ba04335742a4c9024d4d2bf2dd0be854464989f7a0f10,PodSandboxId:cc1b57f90ae120c6294ddef446286d7f72c9de4feab14d3a10be1a3867f5ade4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1689632055394993503,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-p72ln,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aecc37f7-73f7-490b-9b82-bf330600bf41,},Annotations:map[string]string{io.kubernetes.container.hash: a749282e,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97f01e9aa81bc38ecf8c80440fc802c5091c7b471b05f2a3539e295a98878bda,PodSandboxId:4bfa7659fa061be2415d8c0f3b1c4c85c3bc223ac70eda4ff505b1b6b0934824,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689632054268375628,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-z4fr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fb1d992-a7b6-4259-ba61-dc4092c65c44,},Annotations:map[string]string{io.kubernetes.container.hash: e80e6552,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22e023e88b0df5925b1df58e9297423a5c922b86d30c7f9b94d6bdfee5a3139e,PodSandboxId:7cc6d72318a44cf6c24eab948b18e448e36e99064fa0938ae58fdd4ee7c25e04,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1689632049211376256,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gh4hn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: d474f5c5-bd94-411b-8d69-b3871c2b5653,},Annotations:map[string]string{io.kubernetes.container.hash: 15eb87d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da819367706785fe6e31ab060f48f4236429ff2f75dd24db4d3e8e8c51530e4,PodSandboxId:d3e580f9c4f876b2fa9569c5dfe0f0b88e7fd148ddcc0789eebd2903bd1a6a24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689632047374786885,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5spw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4bf0eb3-126a-463e-a670-b4793e1
c5ec9,},Annotations:map[string]string{io.kubernetes.container.hash: 2b8f1632,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d24ab8eef18227acbdcd9dc750b9554a4aad8ebf8b701d8eaec30709439626,PodSandboxId:e2eba808a35fd5557c354d2cf222f610f081891115415003bdc7bccda5083839,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689632046882504546,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f48e9c-2b37-4edc-89e4-d032cac0d
573,},Annotations:map[string]string{io.kubernetes.container.hash: 4c201429,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f472842f94838e4ac7494a9d8ce79419dadeecfc13643528dcdfaf4069ad403f,PodSandboxId:e5f5696eba245088453d497f93fa0175a8a3c7fb5542883785e500e9b75f795f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689632040365411958,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 036d300e0ec7bf28a26e0c644008bbd5,},Annota
tions:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f07edfb41016bb157300a6c682757797cd01208d025a1e018c3a94523c82f60,PodSandboxId:916cf313eec7d1ba3bcbc5f89c1d72e348f8240a6740964399311d648c3cd473,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689632040246289267,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab77d0bbc5cf528d40fb1d6635b3acda,},Annotations:map[string]string{io.kubernetes.container.hash
: 6ce2a6aa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64ec027e7ae509e329d38ea01c30ba1d150bab6a6e9b74428a1ff4d10e66e835,PodSandboxId:4c276e41bd1bb3c6b5b48fff8454f0d8efcec9bcab2750a8e323b8e124378aa9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689632039964849675,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8b61663949a18745a23bcf487c538f2,},Annotations:map[string]string{io.k
ubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7cf8c3ddacdfb41993478ac6d006a40f225edf27e892ef3abb11ca187031c21,PodSandboxId:02b64e405ec2f1e30b4c5076af600bc0befe5666c6ec033c2b33c3cfcb3e7a05,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689632039746546344,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49e7615bd1aa66d6e32161e120c48180,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 2f313b3a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f3d0a6e1-f9d1-4ef3-a811-b26db2fb857c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:17:40 multinode-009530 crio[715]: time="2023-07-17 22:17:40.872975908Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=eab294c6-8154-4bae-9719-d23a32a39255 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:17:40 multinode-009530 crio[715]: time="2023-07-17 22:17:40.873118147Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=eab294c6-8154-4bae-9719-d23a32a39255 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:17:40 multinode-009530 crio[715]: time="2023-07-17 22:17:40.873326602Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:13e45ae02fd5437993b59f4cbc90e87dcba01034c2ed312e93771f8bf6605dbf,PodSandboxId:e2eba808a35fd5557c354d2cf222f610f081891115415003bdc7bccda5083839,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689632077985655453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f48e9c-2b37-4edc-89e4-d032cac0d573,},Annotations:map[string]string{io.kubernetes.container.hash: 4c201429,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd7dfa53d3f6b9b72e9ba04335742a4c9024d4d2bf2dd0be854464989f7a0f10,PodSandboxId:cc1b57f90ae120c6294ddef446286d7f72c9de4feab14d3a10be1a3867f5ade4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1689632055394993503,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-p72ln,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aecc37f7-73f7-490b-9b82-bf330600bf41,},Annotations:map[string]string{io.kubernetes.container.hash: a749282e,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97f01e9aa81bc38ecf8c80440fc802c5091c7b471b05f2a3539e295a98878bda,PodSandboxId:4bfa7659fa061be2415d8c0f3b1c4c85c3bc223ac70eda4ff505b1b6b0934824,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689632054268375628,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-z4fr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fb1d992-a7b6-4259-ba61-dc4092c65c44,},Annotations:map[string]string{io.kubernetes.container.hash: e80e6552,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22e023e88b0df5925b1df58e9297423a5c922b86d30c7f9b94d6bdfee5a3139e,PodSandboxId:7cc6d72318a44cf6c24eab948b18e448e36e99064fa0938ae58fdd4ee7c25e04,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1689632049211376256,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gh4hn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: d474f5c5-bd94-411b-8d69-b3871c2b5653,},Annotations:map[string]string{io.kubernetes.container.hash: 15eb87d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da819367706785fe6e31ab060f48f4236429ff2f75dd24db4d3e8e8c51530e4,PodSandboxId:d3e580f9c4f876b2fa9569c5dfe0f0b88e7fd148ddcc0789eebd2903bd1a6a24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689632047374786885,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5spw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4bf0eb3-126a-463e-a670-b4793e1
c5ec9,},Annotations:map[string]string{io.kubernetes.container.hash: 2b8f1632,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d24ab8eef18227acbdcd9dc750b9554a4aad8ebf8b701d8eaec30709439626,PodSandboxId:e2eba808a35fd5557c354d2cf222f610f081891115415003bdc7bccda5083839,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689632046882504546,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f48e9c-2b37-4edc-89e4-d032cac0d
573,},Annotations:map[string]string{io.kubernetes.container.hash: 4c201429,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f472842f94838e4ac7494a9d8ce79419dadeecfc13643528dcdfaf4069ad403f,PodSandboxId:e5f5696eba245088453d497f93fa0175a8a3c7fb5542883785e500e9b75f795f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689632040365411958,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 036d300e0ec7bf28a26e0c644008bbd5,},Annota
tions:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f07edfb41016bb157300a6c682757797cd01208d025a1e018c3a94523c82f60,PodSandboxId:916cf313eec7d1ba3bcbc5f89c1d72e348f8240a6740964399311d648c3cd473,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689632040246289267,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab77d0bbc5cf528d40fb1d6635b3acda,},Annotations:map[string]string{io.kubernetes.container.hash
: 6ce2a6aa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64ec027e7ae509e329d38ea01c30ba1d150bab6a6e9b74428a1ff4d10e66e835,PodSandboxId:4c276e41bd1bb3c6b5b48fff8454f0d8efcec9bcab2750a8e323b8e124378aa9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689632039964849675,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8b61663949a18745a23bcf487c538f2,},Annotations:map[string]string{io.k
ubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7cf8c3ddacdfb41993478ac6d006a40f225edf27e892ef3abb11ca187031c21,PodSandboxId:02b64e405ec2f1e30b4c5076af600bc0befe5666c6ec033c2b33c3cfcb3e7a05,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689632039746546344,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49e7615bd1aa66d6e32161e120c48180,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 2f313b3a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=eab294c6-8154-4bae-9719-d23a32a39255 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:17:40 multinode-009530 crio[715]: time="2023-07-17 22:17:40.907973184Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fe37a1de-5a2a-4a45-ad7d-15bdb2bf9cdd name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:17:40 multinode-009530 crio[715]: time="2023-07-17 22:17:40.908040315Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fe37a1de-5a2a-4a45-ad7d-15bdb2bf9cdd name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:17:40 multinode-009530 crio[715]: time="2023-07-17 22:17:40.908316256Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:13e45ae02fd5437993b59f4cbc90e87dcba01034c2ed312e93771f8bf6605dbf,PodSandboxId:e2eba808a35fd5557c354d2cf222f610f081891115415003bdc7bccda5083839,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689632077985655453,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f48e9c-2b37-4edc-89e4-d032cac0d573,},Annotations:map[string]string{io.kubernetes.container.hash: 4c201429,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd7dfa53d3f6b9b72e9ba04335742a4c9024d4d2bf2dd0be854464989f7a0f10,PodSandboxId:cc1b57f90ae120c6294ddef446286d7f72c9de4feab14d3a10be1a3867f5ade4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1689632055394993503,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-67b7f59bb-p72ln,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: aecc37f7-73f7-490b-9b82-bf330600bf41,},Annotations:map[string]string{io.kubernetes.container.hash: a749282e,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97f01e9aa81bc38ecf8c80440fc802c5091c7b471b05f2a3539e295a98878bda,PodSandboxId:4bfa7659fa061be2415d8c0f3b1c4c85c3bc223ac70eda4ff505b1b6b0934824,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689632054268375628,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-z4fr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fb1d992-a7b6-4259-ba61-dc4092c65c44,},Annotations:map[string]string{io.kubernetes.container.hash: e80e6552,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22e023e88b0df5925b1df58e9297423a5c922b86d30c7f9b94d6bdfee5a3139e,PodSandboxId:7cc6d72318a44cf6c24eab948b18e448e36e99064fa0938ae58fdd4ee7c25e04,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1689632049211376256,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gh4hn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: d474f5c5-bd94-411b-8d69-b3871c2b5653,},Annotations:map[string]string{io.kubernetes.container.hash: 15eb87d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da819367706785fe6e31ab060f48f4236429ff2f75dd24db4d3e8e8c51530e4,PodSandboxId:d3e580f9c4f876b2fa9569c5dfe0f0b88e7fd148ddcc0789eebd2903bd1a6a24,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689632047374786885,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5spw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4bf0eb3-126a-463e-a670-b4793e1
c5ec9,},Annotations:map[string]string{io.kubernetes.container.hash: 2b8f1632,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d24ab8eef18227acbdcd9dc750b9554a4aad8ebf8b701d8eaec30709439626,PodSandboxId:e2eba808a35fd5557c354d2cf222f610f081891115415003bdc7bccda5083839,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1689632046882504546,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f48e9c-2b37-4edc-89e4-d032cac0d
573,},Annotations:map[string]string{io.kubernetes.container.hash: 4c201429,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f472842f94838e4ac7494a9d8ce79419dadeecfc13643528dcdfaf4069ad403f,PodSandboxId:e5f5696eba245088453d497f93fa0175a8a3c7fb5542883785e500e9b75f795f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689632040365411958,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 036d300e0ec7bf28a26e0c644008bbd5,},Annota
tions:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f07edfb41016bb157300a6c682757797cd01208d025a1e018c3a94523c82f60,PodSandboxId:916cf313eec7d1ba3bcbc5f89c1d72e348f8240a6740964399311d648c3cd473,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689632040246289267,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab77d0bbc5cf528d40fb1d6635b3acda,},Annotations:map[string]string{io.kubernetes.container.hash
: 6ce2a6aa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64ec027e7ae509e329d38ea01c30ba1d150bab6a6e9b74428a1ff4d10e66e835,PodSandboxId:4c276e41bd1bb3c6b5b48fff8454f0d8efcec9bcab2750a8e323b8e124378aa9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689632039964849675,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8b61663949a18745a23bcf487c538f2,},Annotations:map[string]string{io.k
ubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7cf8c3ddacdfb41993478ac6d006a40f225edf27e892ef3abb11ca187031c21,PodSandboxId:02b64e405ec2f1e30b4c5076af600bc0befe5666c6ec033c2b33c3cfcb3e7a05,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689632039746546344,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-009530,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49e7615bd1aa66d6e32161e120c48180,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 2f313b3a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fe37a1de-5a2a-4a45-ad7d-15bdb2bf9cdd name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	13e45ae02fd54       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   e2eba808a35fd
	bd7dfa53d3f6b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   cc1b57f90ae12
	97f01e9aa81bc       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   4bfa7659fa061
	22e023e88b0df       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da                                      3 minutes ago       Running             kindnet-cni               1                   7cc6d72318a44
	4da8193677067       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c                                      3 minutes ago       Running             kube-proxy                1                   d3e580f9c4f87
	78d24ab8eef18       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       1                   e2eba808a35fd
	f472842f94838       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a                                      3 minutes ago       Running             kube-scheduler            1                   e5f5696eba245
	1f07edfb41016       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681                                      3 minutes ago       Running             etcd                      1                   916cf313eec7d
	64ec027e7ae50       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f                                      3 minutes ago       Running             kube-controller-manager   1                   4c276e41bd1bb
	b7cf8c3ddacdf       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a                                      3 minutes ago       Running             kube-apiserver            1                   02b64e405ec2f
	
	* 
	* ==> coredns [97f01e9aa81bc38ecf8c80440fc802c5091c7b471b05f2a3539e295a98878bda] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:32938 - 61899 "HINFO IN 575064687327702304.8318026286661078122. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013800001s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-009530
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-009530
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8
	                    minikube.k8s.io/name=multinode-009530
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T22_03_53_0700
	                    minikube.k8s.io/version=v1.31.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 22:03:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-009530
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 22:17:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 22:14:35 +0000   Mon, 17 Jul 2023 22:03:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 22:14:35 +0000   Mon, 17 Jul 2023 22:03:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 22:14:35 +0000   Mon, 17 Jul 2023 22:03:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 22:14:35 +0000   Mon, 17 Jul 2023 22:14:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.222
	  Hostname:    multinode-009530
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a35bd1412ac04609a53c53355ebc2b8a
	  System UUID:                a35bd141-2ac0-4609-a53c-53355ebc2b8a
	  Boot ID:                    114e6697-e4e6-4e61-b497-3cc5196c40c2
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-p72ln                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-5d78c9869d-z4fr8                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-009530                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-gh4hn                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-009530             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-multinode-009530    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-m5spw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-009530             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 3m33s                  kube-proxy       
	  Normal  Starting                 13m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node multinode-009530 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node multinode-009530 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node multinode-009530 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     13m                    kubelet          Node multinode-009530 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  13m                    kubelet          Node multinode-009530 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                    kubelet          Node multinode-009530 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           13m                    node-controller  Node multinode-009530 event: Registered Node multinode-009530 in Controller
	  Normal  NodeReady                13m                    kubelet          Node multinode-009530 status is now: NodeReady
	  Normal  Starting                 3m43s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m43s (x8 over 3m43s)  kubelet          Node multinode-009530 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m43s (x8 over 3m43s)  kubelet          Node multinode-009530 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m43s (x7 over 3m43s)  kubelet          Node multinode-009530 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m24s                  node-controller  Node multinode-009530 event: Registered Node multinode-009530 in Controller
	
	
	Name:               multinode-009530-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-009530-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 22:15:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-009530-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 22:17:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 22:15:55 +0000   Mon, 17 Jul 2023 22:15:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 22:15:55 +0000   Mon, 17 Jul 2023 22:15:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 22:15:55 +0000   Mon, 17 Jul 2023 22:15:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 22:15:55 +0000   Mon, 17 Jul 2023 22:15:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.146
	  Hostname:    multinode-009530-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 2dd9efaa1b6645328dd273aa339fce67
	  System UUID:                2dd9efaa-1b66-4532-8dd2-73aa339fce67
	  Boot ID:                    35cdeb57-f454-489f-afe5-67bf46ef891c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-vm296    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kindnet-4tb65              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-6rxv8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                  From        Message
	  ----     ------                   ----                 ----        -------
	  Normal   Starting                 12m                  kube-proxy  
	  Normal   Starting                 103s                 kube-proxy  
	  Normal   NodeHasSufficientMemory  12m (x5 over 13m)    kubelet     Node multinode-009530-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x5 over 13m)    kubelet     Node multinode-009530-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 13m)    kubelet     Node multinode-009530-m02 status is now: NodeHasSufficientPID
	  Normal   NodeReady                12m                  kubelet     Node multinode-009530-m02 status is now: NodeReady
	  Normal   NodeNotReady             2m45s                kubelet     Node multinode-009530-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m (x2 over 3m)      kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 106s                 kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  106s (x2 over 106s)  kubelet     Node multinode-009530-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    106s (x2 over 106s)  kubelet     Node multinode-009530-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     106s (x2 over 106s)  kubelet     Node multinode-009530-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  106s                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                106s                 kubelet     Node multinode-009530-m02 status is now: NodeReady
	
	
	Name:               multinode-009530-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-009530-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 22:17:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-009530-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 22:17:36 +0000   Mon, 17 Jul 2023 22:17:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 22:17:36 +0000   Mon, 17 Jul 2023 22:17:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 22:17:36 +0000   Mon, 17 Jul 2023 22:17:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 22:17:36 +0000   Mon, 17 Jul 2023 22:17:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.205
	  Hostname:    multinode-009530-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 32a5f74f292e46859d99cf93d2795ec8
	  System UUID:                32a5f74f-292e-4685-9d99-cf93d2795ec8
	  Boot ID:                    7162397c-72d5-4198-b29d-4ce42dd67d13
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-zfwm6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kindnet-zldcf              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-jv9h4           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From        Message
	  ----     ------                   ----               ----        -------
	  Normal   Starting                 11m                kube-proxy  
	  Normal   Starting                 12m                kube-proxy  
	  Normal   Starting                 3s                 kube-proxy  
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet     Node multinode-009530-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)  kubelet     Node multinode-009530-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)  kubelet     Node multinode-009530-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                12m                kubelet     Node multinode-009530-m03 status is now: NodeReady
	  Normal   Starting                 11m                kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)  kubelet     Node multinode-009530-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  11m                kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet     Node multinode-009530-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)  kubelet     Node multinode-009530-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                11m                kubelet     Node multinode-009530-m03 status is now: NodeReady
	  Normal   NodeNotReady             70s                kubelet     Node multinode-009530-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        31s (x2 over 91s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 5s                 kubelet     Starting kubelet.
	  Normal   NodeHasNoDiskPressure    5s (x2 over 5s)    kubelet     Node multinode-009530-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5s (x2 over 5s)    kubelet     Node multinode-009530-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5s                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                5s                 kubelet     Node multinode-009530-m03 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  5s (x2 over 5s)    kubelet     Node multinode-009530-m03 status is now: NodeHasSufficientMemory
	
	* 
	* ==> dmesg <==
	* [Jul17 22:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.073089] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.351834] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.349075] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.140188] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.446409] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.527009] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.127039] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.153312] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.113976] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.224238] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[ +16.899740] systemd-fstab-generator[913]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [1f07edfb41016bb157300a6c682757797cd01208d025a1e018c3a94523c82f60] <==
	* {"level":"info","ts":"2023-07-17T22:14:01.783Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-17T22:14:01.783Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-07-17T22:14:01.783Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-17T22:14:01.783Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-17T22:14:01.784Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-17T22:14:01.784Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.39.222:2380"}
	{"level":"info","ts":"2023-07-17T22:14:01.784Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.39.222:2380"}
	{"level":"info","ts":"2023-07-17T22:14:01.784Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 switched to configuration voters=(15611694107784645026)"}
	{"level":"info","ts":"2023-07-17T22:14:01.785Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"26257d506d5fabfb","local-member-id":"d8a7e113a49009a2","added-peer-id":"d8a7e113a49009a2","added-peer-peer-urls":["https://192.168.39.222:2380"]}
	{"level":"info","ts":"2023-07-17T22:14:01.785Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"26257d506d5fabfb","local-member-id":"d8a7e113a49009a2","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T22:14:01.785Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T22:14:03.331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 is starting a new election at term 2"}
	{"level":"info","ts":"2023-07-17T22:14:03.331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-07-17T22:14:03.331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 received MsgPreVoteResp from d8a7e113a49009a2 at term 2"}
	{"level":"info","ts":"2023-07-17T22:14:03.331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 became candidate at term 3"}
	{"level":"info","ts":"2023-07-17T22:14:03.331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 received MsgVoteResp from d8a7e113a49009a2 at term 3"}
	{"level":"info","ts":"2023-07-17T22:14:03.331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 became leader at term 3"}
	{"level":"info","ts":"2023-07-17T22:14:03.331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d8a7e113a49009a2 elected leader d8a7e113a49009a2 at term 3"}
	{"level":"info","ts":"2023-07-17T22:14:03.335Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T22:14:03.335Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"d8a7e113a49009a2","local-member-attributes":"{Name:multinode-009530 ClientURLs:[https://192.168.39.222:2379]}","request-path":"/0/members/d8a7e113a49009a2/attributes","cluster-id":"26257d506d5fabfb","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-17T22:14:03.336Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.39.222:2379"}
	{"level":"info","ts":"2023-07-17T22:14:03.337Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T22:14:03.337Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-17T22:14:03.337Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-17T22:14:03.338Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  22:17:41 up 4 min,  0 users,  load average: 0.17, 0.18, 0.09
	Linux multinode-009530 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [22e023e88b0df5925b1df58e9297423a5c922b86d30c7f9b94d6bdfee5a3139e] <==
	* I0717 22:17:10.945983       1 main.go:223] Handling node with IPs: map[192.168.39.222:{}]
	I0717 22:17:10.946063       1 main.go:227] handling current node
	I0717 22:17:10.946168       1 main.go:223] Handling node with IPs: map[192.168.39.146:{}]
	I0717 22:17:10.946176       1 main.go:250] Node multinode-009530-m02 has CIDR [10.244.1.0/24] 
	I0717 22:17:10.946328       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I0717 22:17:10.946334       1 main.go:250] Node multinode-009530-m03 has CIDR [10.244.3.0/24] 
	I0717 22:17:20.951599       1 main.go:223] Handling node with IPs: map[192.168.39.222:{}]
	I0717 22:17:20.951650       1 main.go:227] handling current node
	I0717 22:17:20.951668       1 main.go:223] Handling node with IPs: map[192.168.39.146:{}]
	I0717 22:17:20.951674       1 main.go:250] Node multinode-009530-m02 has CIDR [10.244.1.0/24] 
	I0717 22:17:20.951792       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I0717 22:17:20.951826       1 main.go:250] Node multinode-009530-m03 has CIDR [10.244.3.0/24] 
	I0717 22:17:30.964288       1 main.go:223] Handling node with IPs: map[192.168.39.222:{}]
	I0717 22:17:30.964338       1 main.go:227] handling current node
	I0717 22:17:30.964353       1 main.go:223] Handling node with IPs: map[192.168.39.146:{}]
	I0717 22:17:30.964360       1 main.go:250] Node multinode-009530-m02 has CIDR [10.244.1.0/24] 
	I0717 22:17:30.964484       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I0717 22:17:30.964517       1 main.go:250] Node multinode-009530-m03 has CIDR [10.244.3.0/24] 
	I0717 22:17:40.980487       1 main.go:223] Handling node with IPs: map[192.168.39.222:{}]
	I0717 22:17:40.980550       1 main.go:227] handling current node
	I0717 22:17:40.980571       1 main.go:223] Handling node with IPs: map[192.168.39.146:{}]
	I0717 22:17:40.980577       1 main.go:250] Node multinode-009530-m02 has CIDR [10.244.1.0/24] 
	I0717 22:17:40.980677       1 main.go:223] Handling node with IPs: map[192.168.39.205:{}]
	I0717 22:17:40.980682       1 main.go:250] Node multinode-009530-m03 has CIDR [10.244.2.0/24] 
	I0717 22:17:40.980736       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.205 Flags: [] Table: 0} 
	
	* 
	* ==> kube-apiserver [b7cf8c3ddacdfb41993478ac6d006a40f225edf27e892ef3abb11ca187031c21] <==
	* I0717 22:14:04.892474       1 establishing_controller.go:76] Starting EstablishingController
	I0717 22:14:04.892506       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0717 22:14:04.892544       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0717 22:14:04.892571       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0717 22:14:05.009270       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 22:14:05.038946       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0717 22:14:05.043956       1 shared_informer.go:318] Caches are synced for configmaps
	I0717 22:14:05.049754       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 22:14:05.050736       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 22:14:05.051541       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0717 22:14:05.051589       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0717 22:14:05.057211       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0717 22:14:05.081748       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0717 22:14:05.081782       1 aggregator.go:152] initial CRD sync complete...
	I0717 22:14:05.081804       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 22:14:05.081812       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 22:14:05.081819       1 cache.go:39] Caches are synced for autoregister controller
	I0717 22:14:05.497206       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0717 22:14:05.862939       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 22:14:08.165697       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0717 22:14:08.308936       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0717 22:14:08.319796       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0717 22:14:08.399932       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 22:14:08.414366       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 22:14:55.507827       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [64ec027e7ae509e329d38ea01c30ba1d150bab6a6e9b74428a1ff4d10e66e835] <==
	* I0717 22:14:17.678695       1 shared_informer.go:318] Caches are synced for attach detach
	I0717 22:14:17.706768       1 shared_informer.go:318] Caches are synced for deployment
	I0717 22:14:17.715327       1 shared_informer.go:318] Caches are synced for disruption
	I0717 22:14:17.742216       1 shared_informer.go:318] Caches are synced for resource quota
	I0717 22:14:17.784120       1 shared_informer.go:318] Caches are synced for resource quota
	I0717 22:14:17.814631       1 shared_informer.go:318] Caches are synced for cronjob
	I0717 22:14:18.185477       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 22:14:18.206449       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 22:14:18.206518       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	W0717 22:14:56.468570       1 topologycache.go:232] Can't get CPU or zone information for multinode-009530-m03 node
	I0717 22:15:51.861263       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-zfwm6"
	W0717 22:15:54.860553       1 topologycache.go:232] Can't get CPU or zone information for multinode-009530-m03 node
	I0717 22:15:55.505636       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-009530-m02\" does not exist"
	W0717 22:15:55.509000       1 topologycache.go:232] Can't get CPU or zone information for multinode-009530-m03 node
	I0717 22:15:55.509315       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb-58859" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-67b7f59bb-58859"
	I0717 22:15:55.526906       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-009530-m02" podCIDRs=[10.244.1.0/24]
	W0717 22:15:55.575157       1 topologycache.go:232] Can't get CPU or zone information for multinode-009530-m02 node
	W0717 22:16:31.441614       1 topologycache.go:232] Can't get CPU or zone information for multinode-009530-m02 node
	I0717 22:17:32.650309       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-vm296"
	W0717 22:17:35.662540       1 topologycache.go:232] Can't get CPU or zone information for multinode-009530-m02 node
	W0717 22:17:36.330722       1 topologycache.go:232] Can't get CPU or zone information for multinode-009530-m02 node
	I0717 22:17:36.331529       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-009530-m03\" does not exist"
	I0717 22:17:36.335852       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb-zfwm6" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-67b7f59bb-zfwm6"
	I0717 22:17:36.348145       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-009530-m03" podCIDRs=[10.244.2.0/24]
	W0717 22:17:36.375440       1 topologycache.go:232] Can't get CPU or zone information for multinode-009530-m02 node
	
	* 
	* ==> kube-proxy [4da819367706785fe6e31ab060f48f4236429ff2f75dd24db4d3e8e8c51530e4] <==
	* I0717 22:14:08.090401       1 node.go:141] Successfully retrieved node IP: 192.168.39.222
	I0717 22:14:08.090562       1 server_others.go:110] "Detected node IP" address="192.168.39.222"
	I0717 22:14:08.090655       1 server_others.go:554] "Using iptables proxy"
	I0717 22:14:08.172898       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0717 22:14:08.173438       1 server_others.go:192] "Using iptables Proxier"
	I0717 22:14:08.173599       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 22:14:08.175204       1 server.go:658] "Version info" version="v1.27.3"
	I0717 22:14:08.175448       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 22:14:08.179552       1 config.go:188] "Starting service config controller"
	I0717 22:14:08.179942       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 22:14:08.180054       1 config.go:97] "Starting endpoint slice config controller"
	I0717 22:14:08.180192       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 22:14:08.183305       1 config.go:315] "Starting node config controller"
	I0717 22:14:08.183442       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 22:14:08.280267       1 shared_informer.go:318] Caches are synced for service config
	I0717 22:14:08.280328       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0717 22:14:08.284787       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [f472842f94838e4ac7494a9d8ce79419dadeecfc13643528dcdfaf4069ad403f] <==
	* I0717 22:14:02.280357       1 serving.go:348] Generated self-signed cert in-memory
	W0717 22:14:04.915911       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 22:14:04.916012       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 22:14:04.916026       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 22:14:04.916035       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 22:14:04.967569       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.3"
	I0717 22:14:04.967700       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 22:14:04.977146       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 22:14:04.977224       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 22:14:04.979859       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0717 22:14:04.979942       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 22:14:05.077549       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-07-17 22:13:31 UTC, ends at Mon 2023-07-17 22:17:41 UTC. --
	Jul 17 22:14:07 multinode-009530 kubelet[919]: E0717 22:14:07.519666     919 projected.go:198] Error preparing data for projected volume kube-api-access-ghzxp for pod default/busybox-67b7f59bb-p72ln: object "default"/"kube-root-ca.crt" not registered
	Jul 17 22:14:07 multinode-009530 kubelet[919]: E0717 22:14:07.519708     919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/aecc37f7-73f7-490b-9b82-bf330600bf41-kube-api-access-ghzxp podName:aecc37f7-73f7-490b-9b82-bf330600bf41 nodeName:}" failed. No retries permitted until 2023-07-17 22:14:09.519695022 +0000 UTC m=+11.011859144 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-ghzxp" (UniqueName: "kubernetes.io/projected/aecc37f7-73f7-490b-9b82-bf330600bf41-kube-api-access-ghzxp") pod "busybox-67b7f59bb-p72ln" (UID: "aecc37f7-73f7-490b-9b82-bf330600bf41") : object "default"/"kube-root-ca.crt" not registered
	Jul 17 22:14:07 multinode-009530 kubelet[919]: E0717 22:14:07.774505     919 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-67b7f59bb-p72ln" podUID=aecc37f7-73f7-490b-9b82-bf330600bf41
	Jul 17 22:14:07 multinode-009530 kubelet[919]: E0717 22:14:07.774956     919 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5d78c9869d-z4fr8" podUID=1fb1d992-a7b6-4259-ba61-dc4092c65c44
	Jul 17 22:14:09 multinode-009530 kubelet[919]: E0717 22:14:09.435751     919 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 17 22:14:09 multinode-009530 kubelet[919]: E0717 22:14:09.435856     919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1fb1d992-a7b6-4259-ba61-dc4092c65c44-config-volume podName:1fb1d992-a7b6-4259-ba61-dc4092c65c44 nodeName:}" failed. No retries permitted until 2023-07-17 22:14:13.435839456 +0000 UTC m=+14.928003581 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1fb1d992-a7b6-4259-ba61-dc4092c65c44-config-volume") pod "coredns-5d78c9869d-z4fr8" (UID: "1fb1d992-a7b6-4259-ba61-dc4092c65c44") : object "kube-system"/"coredns" not registered
	Jul 17 22:14:09 multinode-009530 kubelet[919]: E0717 22:14:09.536141     919 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Jul 17 22:14:09 multinode-009530 kubelet[919]: E0717 22:14:09.536204     919 projected.go:198] Error preparing data for projected volume kube-api-access-ghzxp for pod default/busybox-67b7f59bb-p72ln: object "default"/"kube-root-ca.crt" not registered
	Jul 17 22:14:09 multinode-009530 kubelet[919]: E0717 22:14:09.536293     919 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/aecc37f7-73f7-490b-9b82-bf330600bf41-kube-api-access-ghzxp podName:aecc37f7-73f7-490b-9b82-bf330600bf41 nodeName:}" failed. No retries permitted until 2023-07-17 22:14:13.536271977 +0000 UTC m=+15.028436114 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-ghzxp" (UniqueName: "kubernetes.io/projected/aecc37f7-73f7-490b-9b82-bf330600bf41-kube-api-access-ghzxp") pod "busybox-67b7f59bb-p72ln" (UID: "aecc37f7-73f7-490b-9b82-bf330600bf41") : object "default"/"kube-root-ca.crt" not registered
	Jul 17 22:14:09 multinode-009530 kubelet[919]: E0717 22:14:09.774165     919 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-67b7f59bb-p72ln" podUID=aecc37f7-73f7-490b-9b82-bf330600bf41
	Jul 17 22:14:09 multinode-009530 kubelet[919]: E0717 22:14:09.774307     919 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5d78c9869d-z4fr8" podUID=1fb1d992-a7b6-4259-ba61-dc4092c65c44
	Jul 17 22:14:10 multinode-009530 kubelet[919]: I0717 22:14:10.803726     919 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jul 17 22:14:37 multinode-009530 kubelet[919]: I0717 22:14:37.951623     919 scope.go:115] "RemoveContainer" containerID="78d24ab8eef18227acbdcd9dc750b9554a4aad8ebf8b701d8eaec30709439626"
	Jul 17 22:14:58 multinode-009530 kubelet[919]: E0717 22:14:58.792583     919 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 17 22:14:58 multinode-009530 kubelet[919]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 22:14:58 multinode-009530 kubelet[919]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 22:14:58 multinode-009530 kubelet[919]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 22:15:58 multinode-009530 kubelet[919]: E0717 22:15:58.798901     919 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 17 22:15:58 multinode-009530 kubelet[919]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 22:15:58 multinode-009530 kubelet[919]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 22:15:58 multinode-009530 kubelet[919]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 22:16:58 multinode-009530 kubelet[919]: E0717 22:16:58.794699     919 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 17 22:16:58 multinode-009530 kubelet[919]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 22:16:58 multinode-009530 kubelet[919]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 22:16:58 multinode-009530 kubelet[919]:  > table=nat chain=KUBE-KUBELET-CANARY
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-009530 -n multinode-009530
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-009530 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (681.61s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (143.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 stop
E0717 22:18:11.892688   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
multinode_test.go:314: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-009530 stop: exit status 82 (2m1.230633761s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-009530"  ...
	* Stopping node "multinode-009530"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:316: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-009530 stop": exit status 82
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-009530 status: exit status 3 (18.700914992s)

                                                
                                                
-- stdout --
	multinode-009530
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-009530-m02
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 22:20:03.889858   40777 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	E0717 22:20:03.889908   40777 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-009530 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-009530 -n multinode-009530
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-009530 -n multinode-009530: exit status 3 (3.178538098s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 22:20:07.249975   40859 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	E0717 22:20:07.249996   40859 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-009530" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (143.11s)

                                                
                                    
x
+
TestPreload (276.59s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-136918 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0717 22:30:31.146751   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/functional-767593/client.crt: no such file or directory
E0717 22:30:31.747481   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-136918 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m14.862927009s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-136918 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-136918 image pull gcr.io/k8s-minikube/busybox: (1.037423384s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-136918
E0717 22:32:28.101666   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/functional-767593/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-136918: exit status 82 (2m1.123901241s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-136918"  ...
	* Stopping node "test-preload-136918"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-136918 failed: exit status 82
panic.go:522: *** TestPreload FAILED at 2023-07-17 22:32:39.158773459 +0000 UTC m=+3122.948559452
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-136918 -n test-preload-136918
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-136918 -n test-preload-136918: exit status 3 (18.621950578s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 22:32:57.777876   43785 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host
	E0717 22:32:57.777897   43785 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-136918" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-136918" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-136918
--- FAIL: TestPreload (276.59s)
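Root cause visible above: minikube stop exited 82 (GUEST_STOP_TIMEOUT) with the VM still "Running", and the later status probe then lost SSH to 192.168.39.142:22. A minimal manual follow-up against libvirt, assuming the kvm2 driver names the domain after the profile (test-preload-136918), could look like:

    # inspect the domain minikube failed to stop
    virsh domstate test-preload-136918
    # ask for a clean shutdown first, then force it off if the guest ignores ACPI
    virsh shutdown test-preload-136918
    virsh destroy test-preload-136918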

                                                
                                    
x
+
TestRunningBinaryUpgrade (182.75s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.6.2.3197865141.exe start -p running-upgrade-730116 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0717 22:35:31.747992   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: no such file or directory
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.6.2.3197865141.exe start -p running-upgrade-730116 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m19.6632804s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-730116 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0717 22:37:28.101596   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/functional-767593/client.crt: no such file or directory
version_upgrade_test.go:142: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-730116 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (41.309154368s)

                                                
                                                
-- stdout --
	* [running-upgrade-730116] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-15759/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-15759/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	* Using the kvm2 driver based on existing profile
	* Starting control plane node running-upgrade-730116 in cluster running-upgrade-730116
	* Updating the running kvm2 "running-upgrade-730116" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 22:37:27.250221   48513 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:37:27.250334   48513 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:37:27.250342   48513 out.go:309] Setting ErrFile to fd 2...
	I0717 22:37:27.250346   48513 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:37:27.250537   48513 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-15759/.minikube/bin
	I0717 22:37:27.251086   48513 out.go:303] Setting JSON to false
	I0717 22:37:27.251934   48513 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8399,"bootTime":1689625048,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 22:37:27.251991   48513 start.go:138] virtualization: kvm guest
	I0717 22:37:27.254164   48513 out.go:177] * [running-upgrade-730116] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 22:37:27.255647   48513 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 22:37:27.257183   48513 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 22:37:27.255649   48513 notify.go:220] Checking for updates...
	I0717 22:37:27.258690   48513 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:37:27.260060   48513 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-15759/.minikube
	I0717 22:37:27.261417   48513 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 22:37:27.262787   48513 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 22:37:27.266727   48513 config.go:182] Loaded profile config "running-upgrade-730116": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0717 22:37:27.266753   48513 start_flags.go:683] config upgrade: Driver=kvm2
	I0717 22:37:27.266767   48513 start_flags.go:695] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 22:37:27.266875   48513 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/running-upgrade-730116/config.json ...
	I0717 22:37:27.267429   48513 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:37:27.267481   48513 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:37:27.283393   48513 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46093
	I0717 22:37:27.283978   48513 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:37:27.284680   48513 main.go:141] libmachine: Using API Version  1
	I0717 22:37:27.284704   48513 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:37:27.285162   48513 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:37:27.285369   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .DriverName
	I0717 22:37:27.287552   48513 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0717 22:37:27.290668   48513 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 22:37:27.291131   48513 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:37:27.291206   48513 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:37:27.311592   48513 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35745
	I0717 22:37:27.312148   48513 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:37:27.312873   48513 main.go:141] libmachine: Using API Version  1
	I0717 22:37:27.312904   48513 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:37:27.313393   48513 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:37:27.313670   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .DriverName
	I0717 22:37:27.359208   48513 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 22:37:27.360518   48513 start.go:298] selected driver: kvm2
	I0717 22:37:27.360534   48513 start.go:880] validating driver "kvm2" against &{Name:running-upgrade-730116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.94 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:37:27.360661   48513 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 22:37:27.361294   48513 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:37:27.361373   48513 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16899-15759/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 22:37:27.376314   48513 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.0
	I0717 22:37:27.376741   48513 cni.go:84] Creating CNI manager for ""
	I0717 22:37:27.376767   48513 cni.go:130] EnableDefaultCNI is true, recommending bridge
	I0717 22:37:27.376781   48513 start_flags.go:319] config:
	{Name:running-upgrade-730116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.94 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:37:27.377016   48513 iso.go:125] acquiring lock: {Name:mk0fc3164b0f4222cb2c3b274841202f1f6c0fc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:37:27.379006   48513 out.go:177] * Starting control plane node running-upgrade-730116 in cluster running-upgrade-730116
	I0717 22:37:27.380386   48513 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W0717 22:37:27.408800   48513 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0717 22:37:27.408936   48513 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/running-upgrade-730116/config.json ...
	I0717 22:37:27.409103   48513 cache.go:107] acquiring lock: {Name:mk01bc74ef42cddd6cd05b75ec900cb2a05e15de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:37:27.409139   48513 cache.go:107] acquiring lock: {Name:mkb3da569a75c44d9b58a1b4928d64780ad0d276 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:37:27.409130   48513 cache.go:107] acquiring lock: {Name:mk3da5422adaafd4aeee39d11977ad5f399b403c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:37:27.409206   48513 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0717 22:37:27.409219   48513 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 126.928µs
	I0717 22:37:27.409230   48513 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0717 22:37:27.409228   48513 start.go:365] acquiring machines lock for running-upgrade-730116: {Name:mk8a02d78ba6485e1a7d39c2e7feff944c6a9dbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 22:37:27.409247   48513 cache.go:107] acquiring lock: {Name:mk715c8bbf04f2c1484f356378a047fa52d7b1f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:37:27.409261   48513 cache.go:107] acquiring lock: {Name:mk39edc4b63543c3d3dfdbf9feea84cf2d58bce4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:37:27.409323   48513 cache.go:107] acquiring lock: {Name:mk4995e82690518a46844401784049351035af2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:37:27.409328   48513 cache.go:107] acquiring lock: {Name:mkb875e1170998479021cbbc15053fd8295ed082 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:37:27.409298   48513 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0717 22:37:27.409413   48513 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0717 22:37:27.409404   48513 cache.go:107] acquiring lock: {Name:mk57020bc59b5899a6112fa7852e437d2af29822 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:37:27.409432   48513 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I0717 22:37:27.409495   48513 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I0717 22:37:27.409280   48513 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I0717 22:37:27.409664   48513 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I0717 22:37:27.409280   48513 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I0717 22:37:27.410910   48513 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	I0717 22:37:27.410914   48513 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	I0717 22:37:27.411013   48513 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0717 22:37:27.411072   48513 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0717 22:37:27.410981   48513 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I0717 22:37:27.411142   48513 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	I0717 22:37:27.411060   48513 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I0717 22:37:27.577468   48513 cache.go:162] opening:  /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0
	I0717 22:37:27.580659   48513 cache.go:162] opening:  /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5
	I0717 22:37:27.581975   48513 cache.go:162] opening:  /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0717 22:37:27.583372   48513 cache.go:162] opening:  /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0
	I0717 22:37:27.584358   48513 cache.go:162] opening:  /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0
	I0717 22:37:27.596929   48513 cache.go:162] opening:  /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0
	I0717 22:37:27.597538   48513 cache.go:162] opening:  /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0717 22:37:27.647099   48513 cache.go:157] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I0717 22:37:27.647123   48513 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 237.877603ms
	I0717 22:37:27.647133   48513 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I0717 22:37:28.099241   48513 cache.go:157] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I0717 22:37:28.099268   48513 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 689.943663ms
	I0717 22:37:28.099285   48513 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I0717 22:37:28.455548   48513 cache.go:157] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I0717 22:37:28.455578   48513 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 1.046203629s
	I0717 22:37:28.455592   48513 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I0717 22:37:28.843395   48513 cache.go:157] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I0717 22:37:28.843425   48513 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 1.434302565s
	I0717 22:37:28.843480   48513 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I0717 22:37:28.873837   48513 cache.go:157] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I0717 22:37:28.873881   48513 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 1.464723874s
	I0717 22:37:28.873900   48513 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I0717 22:37:29.314737   48513 cache.go:157] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I0717 22:37:29.314771   48513 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 1.905650026s
	I0717 22:37:29.314787   48513 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I0717 22:37:29.395808   48513 cache.go:157] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0717 22:37:29.395835   48513 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 1.986519115s
	I0717 22:37:29.395846   48513 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0717 22:37:29.395872   48513 cache.go:87] Successfully saved all images to host disk.
	I0717 22:38:04.758950   48513 start.go:369] acquired machines lock for "running-upgrade-730116" in 37.349674704s
	I0717 22:38:04.758999   48513 start.go:96] Skipping create...Using existing machine configuration
	I0717 22:38:04.759009   48513 fix.go:54] fixHost starting: minikube
	I0717 22:38:04.759422   48513 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:38:04.759461   48513 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:38:04.777780   48513 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34165
	I0717 22:38:04.778240   48513 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:38:04.778738   48513 main.go:141] libmachine: Using API Version  1
	I0717 22:38:04.778765   48513 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:38:04.779183   48513 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:38:04.779383   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .DriverName
	I0717 22:38:04.779546   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetState
	I0717 22:38:04.781492   48513 fix.go:102] recreateIfNeeded on running-upgrade-730116: state=Running err=<nil>
	W0717 22:38:04.781566   48513 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 22:38:04.783877   48513 out.go:177] * Updating the running kvm2 "running-upgrade-730116" VM ...
	I0717 22:38:04.785432   48513 machine.go:88] provisioning docker machine ...
	I0717 22:38:04.785460   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .DriverName
	I0717 22:38:04.785777   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetMachineName
	I0717 22:38:04.785971   48513 buildroot.go:166] provisioning hostname "running-upgrade-730116"
	I0717 22:38:04.785994   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetMachineName
	I0717 22:38:04.786139   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHHostname
	I0717 22:38:04.789099   48513 main.go:141] libmachine: (running-upgrade-730116) DBG | domain running-upgrade-730116 has defined MAC address 52:54:00:43:b7:cb in network minikube-net
	I0717 22:38:04.789695   48513 main.go:141] libmachine: (running-upgrade-730116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b7:cb", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 23:35:39 +0000 UTC Type:0 Mac:52:54:00:43:b7:cb Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:running-upgrade-730116 Clientid:01:52:54:00:43:b7:cb}
	I0717 22:38:04.789729   48513 main.go:141] libmachine: (running-upgrade-730116) DBG | domain running-upgrade-730116 has defined IP address 192.168.50.94 and MAC address 52:54:00:43:b7:cb in network minikube-net
	I0717 22:38:04.789954   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHPort
	I0717 22:38:04.790165   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHKeyPath
	I0717 22:38:04.790355   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHKeyPath
	I0717 22:38:04.790535   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHUsername
	I0717 22:38:04.790720   48513 main.go:141] libmachine: Using SSH client type: native
	I0717 22:38:04.791145   48513 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.94 22 <nil> <nil>}
	I0717 22:38:04.791161   48513 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-730116 && echo "running-upgrade-730116" | sudo tee /etc/hostname
	I0717 22:38:04.931170   48513 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-730116
	
	I0717 22:38:04.931205   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHHostname
	I0717 22:38:05.255266   48513 main.go:141] libmachine: (running-upgrade-730116) DBG | domain running-upgrade-730116 has defined MAC address 52:54:00:43:b7:cb in network minikube-net
	I0717 22:38:05.255651   48513 main.go:141] libmachine: (running-upgrade-730116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b7:cb", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 23:35:39 +0000 UTC Type:0 Mac:52:54:00:43:b7:cb Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:running-upgrade-730116 Clientid:01:52:54:00:43:b7:cb}
	I0717 22:38:05.255694   48513 main.go:141] libmachine: (running-upgrade-730116) DBG | domain running-upgrade-730116 has defined IP address 192.168.50.94 and MAC address 52:54:00:43:b7:cb in network minikube-net
	I0717 22:38:05.255831   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHPort
	I0717 22:38:05.256032   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHKeyPath
	I0717 22:38:05.256210   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHKeyPath
	I0717 22:38:05.256351   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHUsername
	I0717 22:38:05.256533   48513 main.go:141] libmachine: Using SSH client type: native
	I0717 22:38:05.257160   48513 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.94 22 <nil> <nil>}
	I0717 22:38:05.257188   48513 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-730116' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-730116/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-730116' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 22:38:05.415150   48513 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:38:05.415201   48513 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16899-15759/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-15759/.minikube}
	I0717 22:38:05.415237   48513 buildroot.go:174] setting up certificates
	I0717 22:38:05.415247   48513 provision.go:83] configureAuth start
	I0717 22:38:05.415262   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetMachineName
	I0717 22:38:05.415489   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetIP
	I0717 22:38:05.418721   48513 main.go:141] libmachine: (running-upgrade-730116) DBG | domain running-upgrade-730116 has defined MAC address 52:54:00:43:b7:cb in network minikube-net
	I0717 22:38:05.419187   48513 main.go:141] libmachine: (running-upgrade-730116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b7:cb", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 23:35:39 +0000 UTC Type:0 Mac:52:54:00:43:b7:cb Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:running-upgrade-730116 Clientid:01:52:54:00:43:b7:cb}
	I0717 22:38:05.419283   48513 main.go:141] libmachine: (running-upgrade-730116) DBG | domain running-upgrade-730116 has defined IP address 192.168.50.94 and MAC address 52:54:00:43:b7:cb in network minikube-net
	I0717 22:38:05.419537   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHHostname
	I0717 22:38:05.422225   48513 main.go:141] libmachine: (running-upgrade-730116) DBG | domain running-upgrade-730116 has defined MAC address 52:54:00:43:b7:cb in network minikube-net
	I0717 22:38:05.422727   48513 main.go:141] libmachine: (running-upgrade-730116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b7:cb", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 23:35:39 +0000 UTC Type:0 Mac:52:54:00:43:b7:cb Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:running-upgrade-730116 Clientid:01:52:54:00:43:b7:cb}
	I0717 22:38:05.422757   48513 main.go:141] libmachine: (running-upgrade-730116) DBG | domain running-upgrade-730116 has defined IP address 192.168.50.94 and MAC address 52:54:00:43:b7:cb in network minikube-net
	I0717 22:38:05.422930   48513 provision.go:138] copyHostCerts
	I0717 22:38:05.422986   48513 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem, removing ...
	I0717 22:38:05.422997   48513 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem
	I0717 22:38:05.423063   48513 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem (1078 bytes)
	I0717 22:38:05.423175   48513 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem, removing ...
	I0717 22:38:05.423181   48513 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem
	I0717 22:38:05.423210   48513 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem (1123 bytes)
	I0717 22:38:05.423281   48513 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem, removing ...
	I0717 22:38:05.423287   48513 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem
	I0717 22:38:05.423311   48513 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem (1675 bytes)
	I0717 22:38:05.423367   48513 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-730116 san=[192.168.50.94 192.168.50.94 localhost 127.0.0.1 minikube running-upgrade-730116]
	I0717 22:38:05.517175   48513 provision.go:172] copyRemoteCerts
	I0717 22:38:05.517231   48513 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 22:38:05.517254   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHHostname
	I0717 22:38:05.520337   48513 main.go:141] libmachine: (running-upgrade-730116) DBG | domain running-upgrade-730116 has defined MAC address 52:54:00:43:b7:cb in network minikube-net
	I0717 22:38:05.520790   48513 main.go:141] libmachine: (running-upgrade-730116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b7:cb", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 23:35:39 +0000 UTC Type:0 Mac:52:54:00:43:b7:cb Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:running-upgrade-730116 Clientid:01:52:54:00:43:b7:cb}
	I0717 22:38:05.520832   48513 main.go:141] libmachine: (running-upgrade-730116) DBG | domain running-upgrade-730116 has defined IP address 192.168.50.94 and MAC address 52:54:00:43:b7:cb in network minikube-net
	I0717 22:38:05.521079   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHPort
	I0717 22:38:05.521295   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHKeyPath
	I0717 22:38:05.521438   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHUsername
	I0717 22:38:05.521710   48513 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/running-upgrade-730116/id_rsa Username:docker}
	I0717 22:38:05.624115   48513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 22:38:05.641844   48513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 22:38:05.659093   48513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 22:38:05.677984   48513 provision.go:86] duration metric: configureAuth took 262.725207ms
	I0717 22:38:05.678011   48513 buildroot.go:189] setting minikube options for container-runtime
	I0717 22:38:05.678176   48513 config.go:182] Loaded profile config "running-upgrade-730116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0717 22:38:05.678267   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHHostname
	I0717 22:38:05.681019   48513 main.go:141] libmachine: (running-upgrade-730116) DBG | domain running-upgrade-730116 has defined MAC address 52:54:00:43:b7:cb in network minikube-net
	I0717 22:38:05.681396   48513 main.go:141] libmachine: (running-upgrade-730116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b7:cb", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 23:35:39 +0000 UTC Type:0 Mac:52:54:00:43:b7:cb Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:running-upgrade-730116 Clientid:01:52:54:00:43:b7:cb}
	I0717 22:38:05.681449   48513 main.go:141] libmachine: (running-upgrade-730116) DBG | domain running-upgrade-730116 has defined IP address 192.168.50.94 and MAC address 52:54:00:43:b7:cb in network minikube-net
	I0717 22:38:05.681599   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHPort
	I0717 22:38:05.681801   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHKeyPath
	I0717 22:38:05.681960   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHKeyPath
	I0717 22:38:05.682094   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHUsername
	I0717 22:38:05.682370   48513 main.go:141] libmachine: Using SSH client type: native
	I0717 22:38:05.682969   48513 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.94 22 <nil> <nil>}
	I0717 22:38:05.682999   48513 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 22:38:06.314639   48513 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 22:38:06.314665   48513 machine.go:91] provisioned docker machine in 1.529217607s
	I0717 22:38:06.314677   48513 start.go:300] post-start starting for "running-upgrade-730116" (driver="kvm2")
	I0717 22:38:06.314688   48513 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 22:38:06.314710   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .DriverName
	I0717 22:38:06.314987   48513 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 22:38:06.315014   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHHostname
	I0717 22:38:06.317747   48513 main.go:141] libmachine: (running-upgrade-730116) DBG | domain running-upgrade-730116 has defined MAC address 52:54:00:43:b7:cb in network minikube-net
	I0717 22:38:06.318169   48513 main.go:141] libmachine: (running-upgrade-730116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b7:cb", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 23:35:39 +0000 UTC Type:0 Mac:52:54:00:43:b7:cb Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:running-upgrade-730116 Clientid:01:52:54:00:43:b7:cb}
	I0717 22:38:06.318201   48513 main.go:141] libmachine: (running-upgrade-730116) DBG | domain running-upgrade-730116 has defined IP address 192.168.50.94 and MAC address 52:54:00:43:b7:cb in network minikube-net
	I0717 22:38:06.318342   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHPort
	I0717 22:38:06.318540   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHKeyPath
	I0717 22:38:06.318689   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHUsername
	I0717 22:38:06.318802   48513 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/running-upgrade-730116/id_rsa Username:docker}
	I0717 22:38:06.407767   48513 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 22:38:06.413072   48513 info.go:137] Remote host: Buildroot 2019.02.7
	I0717 22:38:06.413101   48513 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/addons for local assets ...
	I0717 22:38:06.413180   48513 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/files for local assets ...
	I0717 22:38:06.413279   48513 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> 229902.pem in /etc/ssl/certs
	I0717 22:38:06.413393   48513 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 22:38:06.420834   48513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:38:06.438526   48513 start.go:303] post-start completed in 123.834093ms
	I0717 22:38:06.438553   48513 fix.go:56] fixHost completed within 1.679543311s
	I0717 22:38:06.438579   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHHostname
	I0717 22:38:06.441808   48513 main.go:141] libmachine: (running-upgrade-730116) DBG | domain running-upgrade-730116 has defined MAC address 52:54:00:43:b7:cb in network minikube-net
	I0717 22:38:06.442262   48513 main.go:141] libmachine: (running-upgrade-730116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b7:cb", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 23:35:39 +0000 UTC Type:0 Mac:52:54:00:43:b7:cb Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:running-upgrade-730116 Clientid:01:52:54:00:43:b7:cb}
	I0717 22:38:06.442301   48513 main.go:141] libmachine: (running-upgrade-730116) DBG | domain running-upgrade-730116 has defined IP address 192.168.50.94 and MAC address 52:54:00:43:b7:cb in network minikube-net
	I0717 22:38:06.442494   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHPort
	I0717 22:38:06.442701   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHKeyPath
	I0717 22:38:06.442898   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHKeyPath
	I0717 22:38:06.443057   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHUsername
	I0717 22:38:06.443280   48513 main.go:141] libmachine: Using SSH client type: native
	I0717 22:38:06.443948   48513 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.94 22 <nil> <nil>}
	I0717 22:38:06.443976   48513 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 22:38:06.572410   48513 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689633486.568512037
	
	I0717 22:38:06.572436   48513 fix.go:206] guest clock: 1689633486.568512037
	I0717 22:38:06.572448   48513 fix.go:219] Guest: 2023-07-17 22:38:06.568512037 +0000 UTC Remote: 2023-07-17 22:38:06.438557379 +0000 UTC m=+39.228374618 (delta=129.954658ms)
	I0717 22:38:06.572473   48513 fix.go:190] guest clock delta is within tolerance: 129.954658ms
	I0717 22:38:06.572479   48513 start.go:83] releasing machines lock for "running-upgrade-730116", held for 1.813503206s
	I0717 22:38:06.572519   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .DriverName
	I0717 22:38:06.572787   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetIP
	I0717 22:38:06.576345   48513 main.go:141] libmachine: (running-upgrade-730116) DBG | domain running-upgrade-730116 has defined MAC address 52:54:00:43:b7:cb in network minikube-net
	I0717 22:38:06.576898   48513 main.go:141] libmachine: (running-upgrade-730116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b7:cb", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 23:35:39 +0000 UTC Type:0 Mac:52:54:00:43:b7:cb Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:running-upgrade-730116 Clientid:01:52:54:00:43:b7:cb}
	I0717 22:38:06.576931   48513 main.go:141] libmachine: (running-upgrade-730116) DBG | domain running-upgrade-730116 has defined IP address 192.168.50.94 and MAC address 52:54:00:43:b7:cb in network minikube-net
	I0717 22:38:06.577383   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .DriverName
	I0717 22:38:06.579204   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .DriverName
	I0717 22:38:06.579441   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .DriverName
	I0717 22:38:06.579518   48513 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 22:38:06.579567   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHHostname
	I0717 22:38:06.580012   48513 ssh_runner.go:195] Run: cat /version.json
	I0717 22:38:06.580039   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHHostname
	I0717 22:38:06.584851   48513 main.go:141] libmachine: (running-upgrade-730116) DBG | domain running-upgrade-730116 has defined MAC address 52:54:00:43:b7:cb in network minikube-net
	I0717 22:38:06.585600   48513 main.go:141] libmachine: (running-upgrade-730116) DBG | domain running-upgrade-730116 has defined MAC address 52:54:00:43:b7:cb in network minikube-net
	I0717 22:38:06.585633   48513 main.go:141] libmachine: (running-upgrade-730116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b7:cb", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 23:35:39 +0000 UTC Type:0 Mac:52:54:00:43:b7:cb Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:running-upgrade-730116 Clientid:01:52:54:00:43:b7:cb}
	I0717 22:38:06.585651   48513 main.go:141] libmachine: (running-upgrade-730116) DBG | domain running-upgrade-730116 has defined IP address 192.168.50.94 and MAC address 52:54:00:43:b7:cb in network minikube-net
	I0717 22:38:06.585859   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHPort
	I0717 22:38:06.586050   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHKeyPath
	I0717 22:38:06.586151   48513 main.go:141] libmachine: (running-upgrade-730116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b7:cb", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 23:35:39 +0000 UTC Type:0 Mac:52:54:00:43:b7:cb Iaid: IPaddr:192.168.50.94 Prefix:24 Hostname:running-upgrade-730116 Clientid:01:52:54:00:43:b7:cb}
	I0717 22:38:06.586195   48513 main.go:141] libmachine: (running-upgrade-730116) DBG | domain running-upgrade-730116 has defined IP address 192.168.50.94 and MAC address 52:54:00:43:b7:cb in network minikube-net
	I0717 22:38:06.586243   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHUsername
	I0717 22:38:06.586393   48513 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/running-upgrade-730116/id_rsa Username:docker}
	I0717 22:38:06.586455   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHPort
	I0717 22:38:06.586594   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHKeyPath
	I0717 22:38:06.586731   48513 main.go:141] libmachine: (running-upgrade-730116) Calling .GetSSHUsername
	I0717 22:38:06.586866   48513 sshutil.go:53] new ssh client: &{IP:192.168.50.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/running-upgrade-730116/id_rsa Username:docker}
	W0717 22:38:06.705275   48513 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0717 22:38:06.705349   48513 ssh_runner.go:195] Run: systemctl --version
	I0717 22:38:06.712120   48513 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 22:38:06.834896   48513 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 22:38:06.842508   48513 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 22:38:06.842583   48513 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:38:06.850588   48513 cni.go:265] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 22:38:06.850618   48513 start.go:466] detecting cgroup driver to use...
	I0717 22:38:06.850691   48513 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 22:38:06.863665   48513 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 22:38:06.874336   48513 docker.go:196] disabling cri-docker service (if available) ...
	I0717 22:38:06.874402   48513 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 22:38:06.885556   48513 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 22:38:06.897952   48513 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0717 22:38:06.913100   48513 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0717 22:38:06.913192   48513 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 22:38:07.097749   48513 docker.go:212] disabling docker service ...
	I0717 22:38:07.097818   48513 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 22:38:08.126201   48513 ssh_runner.go:235] Completed: sudo systemctl stop -f docker.socket: (1.028357534s)
	I0717 22:38:08.126272   48513 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 22:38:08.143246   48513 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 22:38:08.297885   48513 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 22:38:08.465890   48513 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 22:38:08.480011   48513 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 22:38:08.494636   48513 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0717 22:38:08.494708   48513 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:38:08.506341   48513 out.go:177] 
	W0717 22:38:08.507935   48513 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0717 22:38:08.507957   48513 out.go:239] * 
	* 
	W0717 22:38:08.508820   48513 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 22:38:08.510378   48513 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:144: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-730116 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-07-17 22:38:08.531207436 +0000 UTC m=+3452.320993434
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-730116 -n running-upgrade-730116
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-730116 -n running-upgrade-730116: exit status 4 (274.814552ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 22:38:08.768750   49099 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-730116" does not appear in /home/jenkins/minikube-integration/16899-15759/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-730116" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-730116" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-730116
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-730116: (1.176882159s)
--- FAIL: TestRunningBinaryUpgrade (182.75s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (304.18s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.6.2.4205550904.exe start -p stopped-upgrade-132802 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.6.2.4205550904.exe start -p stopped-upgrade-132802 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m22.201316753s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.6.2.4205550904.exe -p stopped-upgrade-132802 stop
version_upgrade_test.go:204: (dbg) Done: /tmp/minikube-v1.6.2.4205550904.exe -p stopped-upgrade-132802 stop: (1m32.883704831s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-132802 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:210: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-132802 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (1m9.088417367s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-132802] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-15759/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-15759/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-132802 in cluster stopped-upgrade-132802
	* Restarting existing kvm2 VM for "stopped-upgrade-132802" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 22:38:53.022247   49673 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:38:53.022420   49673 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:38:53.022428   49673 out.go:309] Setting ErrFile to fd 2...
	I0717 22:38:53.022433   49673 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:38:53.022626   49673 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-15759/.minikube/bin
	I0717 22:38:53.023155   49673 out.go:303] Setting JSON to false
	I0717 22:38:53.023995   49673 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8485,"bootTime":1689625048,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 22:38:53.024049   49673 start.go:138] virtualization: kvm guest
	I0717 22:38:53.026515   49673 out.go:177] * [stopped-upgrade-132802] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 22:38:53.028695   49673 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 22:38:53.028702   49673 notify.go:220] Checking for updates...
	I0717 22:38:53.030295   49673 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 22:38:53.032137   49673 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:38:53.033937   49673 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-15759/.minikube
	I0717 22:38:53.035535   49673 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 22:38:53.037125   49673 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 22:38:53.038862   49673 config.go:182] Loaded profile config "stopped-upgrade-132802": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0717 22:38:53.038880   49673 start_flags.go:683] config upgrade: Driver=kvm2
	I0717 22:38:53.038888   49673 start_flags.go:695] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 22:38:53.038950   49673 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/stopped-upgrade-132802/config.json ...
	I0717 22:38:53.039493   49673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:38:53.039534   49673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:38:53.056520   49673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34991
	I0717 22:38:53.056935   49673 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:38:53.057576   49673 main.go:141] libmachine: Using API Version  1
	I0717 22:38:53.057601   49673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:38:53.058088   49673 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:38:53.058299   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .DriverName
	I0717 22:38:53.060713   49673 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0717 22:38:53.062306   49673 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 22:38:53.062725   49673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:38:53.062773   49673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:38:53.078355   49673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36697
	I0717 22:38:53.078807   49673 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:38:53.079281   49673 main.go:141] libmachine: Using API Version  1
	I0717 22:38:53.079301   49673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:38:53.079619   49673 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:38:53.079816   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .DriverName
	I0717 22:38:53.117344   49673 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 22:38:53.118718   49673 start.go:298] selected driver: kvm2
	I0717 22:38:53.118731   49673 start.go:880] validating driver "kvm2" against &{Name:stopped-upgrade-132802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APISe
rverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.42 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:38:53.118807   49673 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 22:38:53.119543   49673 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:38:53.119615   49673 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16899-15759/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 22:38:53.134640   49673 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.0
	I0717 22:38:53.134957   49673 cni.go:84] Creating CNI manager for ""
	I0717 22:38:53.134970   49673 cni.go:130] EnableDefaultCNI is true, recommending bridge
	I0717 22:38:53.134978   49673 start_flags.go:319] config:
	{Name:stopped-upgrade-132802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.42 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:38:53.135129   49673 iso.go:125] acquiring lock: {Name:mk0fc3164b0f4222cb2c3b274841202f1f6c0fc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:38:53.137090   49673 out.go:177] * Starting control plane node stopped-upgrade-132802 in cluster stopped-upgrade-132802
	I0717 22:38:53.138535   49673 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W0717 22:38:53.165387   49673 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0717 22:38:53.165554   49673 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/stopped-upgrade-132802/config.json ...
	I0717 22:38:53.165659   49673 cache.go:107] acquiring lock: {Name:mk01bc74ef42cddd6cd05b75ec900cb2a05e15de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:38:53.165682   49673 cache.go:107] acquiring lock: {Name:mk3da5422adaafd4aeee39d11977ad5f399b403c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:38:53.165744   49673 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0717 22:38:53.165776   49673 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I0717 22:38:53.165789   49673 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 118.778µs
	I0717 22:38:53.165795   49673 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 104.779µs
	I0717 22:38:53.165808   49673 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0717 22:38:53.165697   49673 cache.go:107] acquiring lock: {Name:mkb3da569a75c44d9b58a1b4928d64780ad0d276 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:38:53.165830   49673 cache.go:107] acquiring lock: {Name:mk715c8bbf04f2c1484f356378a047fa52d7b1f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:38:53.165795   49673 cache.go:107] acquiring lock: {Name:mk57020bc59b5899a6112fa7852e437d2af29822 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:38:53.165857   49673 start.go:365] acquiring machines lock for stopped-upgrade-132802: {Name:mk8a02d78ba6485e1a7d39c2e7feff944c6a9dbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 22:38:53.165890   49673 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I0717 22:38:53.165898   49673 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I0717 22:38:53.165808   49673 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I0717 22:38:53.165906   49673 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 81.76µs
	I0717 22:38:53.165906   49673 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 214.849µs
	I0717 22:38:53.165921   49673 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I0717 22:38:53.165926   49673 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I0717 22:38:53.165882   49673 cache.go:107] acquiring lock: {Name:mkb875e1170998479021cbbc15053fd8295ed082 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:38:53.165891   49673 cache.go:107] acquiring lock: {Name:mk4995e82690518a46844401784049351035af2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:38:53.165932   49673 cache.go:107] acquiring lock: {Name:mk39edc4b63543c3d3dfdbf9feea84cf2d58bce4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:38:53.165955   49673 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I0717 22:38:53.165968   49673 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 248.61µs
	I0717 22:38:53.165988   49673 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I0717 22:38:53.165996   49673 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I0717 22:38:53.166006   49673 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 161.686µs
	I0717 22:38:53.166019   49673 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I0717 22:38:53.166012   49673 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0717 22:38:53.166030   49673 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 185.618µs
	I0717 22:38:53.166028   49673 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I0717 22:38:53.166038   49673 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0717 22:38:53.166043   49673 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 154.32µs
	I0717 22:38:53.166056   49673 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I0717 22:38:53.166063   49673 cache.go:87] Successfully saved all images to host disk.
	I0717 22:39:20.202834   49673 start.go:369] acquired machines lock for "stopped-upgrade-132802" in 27.036942367s
	I0717 22:39:20.202881   49673 start.go:96] Skipping create...Using existing machine configuration
	I0717 22:39:20.202889   49673 fix.go:54] fixHost starting: minikube
	I0717 22:39:20.203314   49673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:39:20.203363   49673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:39:20.221815   49673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44437
	I0717 22:39:20.222242   49673 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:39:20.222799   49673 main.go:141] libmachine: Using API Version  1
	I0717 22:39:20.222831   49673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:39:20.223734   49673 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:39:20.223967   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .DriverName
	I0717 22:39:20.225157   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetState
	I0717 22:39:20.227443   49673 fix.go:102] recreateIfNeeded on stopped-upgrade-132802: state=Stopped err=<nil>
	I0717 22:39:20.227471   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .DriverName
	W0717 22:39:20.227620   49673 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 22:39:20.229635   49673 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-132802" ...
	I0717 22:39:20.230948   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .Start
	I0717 22:39:20.231101   49673 main.go:141] libmachine: (stopped-upgrade-132802) Ensuring networks are active...
	I0717 22:39:20.232235   49673 main.go:141] libmachine: (stopped-upgrade-132802) Ensuring network default is active
	I0717 22:39:20.232491   49673 main.go:141] libmachine: (stopped-upgrade-132802) Ensuring network minikube-net is active
	I0717 22:39:20.232911   49673 main.go:141] libmachine: (stopped-upgrade-132802) Getting domain xml...
	I0717 22:39:20.233481   49673 main.go:141] libmachine: (stopped-upgrade-132802) Creating domain...
	I0717 22:39:20.651900   49673 main.go:141] libmachine: (stopped-upgrade-132802) Waiting to get IP...
	I0717 22:39:20.653050   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:39:20.653512   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | unable to find current IP address of domain stopped-upgrade-132802 in network minikube-net
	I0717 22:39:20.653684   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | I0717 22:39:20.653546   49961 retry.go:31] will retry after 228.86586ms: waiting for machine to come up
	I0717 22:39:20.884619   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:39:20.885128   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | unable to find current IP address of domain stopped-upgrade-132802 in network minikube-net
	I0717 22:39:20.885155   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | I0717 22:39:20.885089   49961 retry.go:31] will retry after 254.083015ms: waiting for machine to come up
	I0717 22:39:21.411197   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:39:21.411719   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | unable to find current IP address of domain stopped-upgrade-132802 in network minikube-net
	I0717 22:39:21.411746   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | I0717 22:39:21.411674   49961 retry.go:31] will retry after 486.733772ms: waiting for machine to come up
	I0717 22:39:21.900098   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:39:21.900546   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | unable to find current IP address of domain stopped-upgrade-132802 in network minikube-net
	I0717 22:39:21.900593   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | I0717 22:39:21.900523   49961 retry.go:31] will retry after 423.232372ms: waiting for machine to come up
	I0717 22:39:22.325161   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:39:22.325740   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | unable to find current IP address of domain stopped-upgrade-132802 in network minikube-net
	I0717 22:39:22.325775   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | I0717 22:39:22.325692   49961 retry.go:31] will retry after 746.304835ms: waiting for machine to come up
	I0717 22:39:23.073355   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:39:23.073874   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | unable to find current IP address of domain stopped-upgrade-132802 in network minikube-net
	I0717 22:39:23.073900   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | I0717 22:39:23.073823   49961 retry.go:31] will retry after 726.000399ms: waiting for machine to come up
	I0717 22:39:23.801764   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:39:23.802402   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | unable to find current IP address of domain stopped-upgrade-132802 in network minikube-net
	I0717 22:39:23.802435   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | I0717 22:39:23.802346   49961 retry.go:31] will retry after 1.139144747s: waiting for machine to come up
	I0717 22:39:24.942922   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:39:24.943399   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | unable to find current IP address of domain stopped-upgrade-132802 in network minikube-net
	I0717 22:39:24.943434   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | I0717 22:39:24.943345   49961 retry.go:31] will retry after 901.42079ms: waiting for machine to come up
	I0717 22:39:25.847031   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:39:25.847672   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | unable to find current IP address of domain stopped-upgrade-132802 in network minikube-net
	I0717 22:39:25.847705   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | I0717 22:39:25.847615   49961 retry.go:31] will retry after 1.564454196s: waiting for machine to come up
	I0717 22:39:27.413377   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:39:27.413905   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | unable to find current IP address of domain stopped-upgrade-132802 in network minikube-net
	I0717 22:39:27.413934   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | I0717 22:39:27.413867   49961 retry.go:31] will retry after 1.870233888s: waiting for machine to come up
	I0717 22:39:29.286619   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:39:29.287114   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | unable to find current IP address of domain stopped-upgrade-132802 in network minikube-net
	I0717 22:39:29.287141   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | I0717 22:39:29.287053   49961 retry.go:31] will retry after 1.890104452s: waiting for machine to come up
	I0717 22:39:31.178426   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:39:31.178931   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | unable to find current IP address of domain stopped-upgrade-132802 in network minikube-net
	I0717 22:39:31.178954   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | I0717 22:39:31.178878   49961 retry.go:31] will retry after 2.700107595s: waiting for machine to come up
	I0717 22:39:33.880549   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:39:33.881145   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | unable to find current IP address of domain stopped-upgrade-132802 in network minikube-net
	I0717 22:39:33.881176   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | I0717 22:39:33.881083   49961 retry.go:31] will retry after 3.549333795s: waiting for machine to come up
	I0717 22:39:37.433126   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:39:37.433592   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | unable to find current IP address of domain stopped-upgrade-132802 in network minikube-net
	I0717 22:39:37.433623   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | I0717 22:39:37.433546   49961 retry.go:31] will retry after 4.203689806s: waiting for machine to come up
	I0717 22:39:41.638782   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:39:41.639237   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | unable to find current IP address of domain stopped-upgrade-132802 in network minikube-net
	I0717 22:39:41.639262   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | I0717 22:39:41.639189   49961 retry.go:31] will retry after 6.943485529s: waiting for machine to come up
	I0717 22:39:48.584935   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:39:48.585396   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | unable to find current IP address of domain stopped-upgrade-132802 in network minikube-net
	I0717 22:39:48.585425   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | I0717 22:39:48.585335   49961 retry.go:31] will retry after 8.77231903s: waiting for machine to come up
	I0717 22:39:57.359868   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:39:57.360518   49673 main.go:141] libmachine: (stopped-upgrade-132802) Found IP for machine: 192.168.50.42
	I0717 22:39:57.360555   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has current primary IP address 192.168.50.42 and MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:39:57.360565   49673 main.go:141] libmachine: (stopped-upgrade-132802) Reserving static IP address...
	I0717 22:39:57.361066   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | found host DHCP lease matching {name: "stopped-upgrade-132802", mac: "52:54:00:03:0c:23", ip: "192.168.50.42"} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 23:39:47 +0000 UTC Type:0 Mac:52:54:00:03:0c:23 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:stopped-upgrade-132802 Clientid:01:52:54:00:03:0c:23}
	I0717 22:39:57.361102   49673 main.go:141] libmachine: (stopped-upgrade-132802) Reserved static IP address: 192.168.50.42
	I0717 22:39:57.361119   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-132802", mac: "52:54:00:03:0c:23", ip: "192.168.50.42"}
	I0717 22:39:57.361138   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | Getting to WaitForSSH function...
	I0717 22:39:57.361155   49673 main.go:141] libmachine: (stopped-upgrade-132802) Waiting for SSH to be available...
	I0717 22:39:57.363290   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:39:57.363685   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:0c:23", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 23:39:47 +0000 UTC Type:0 Mac:52:54:00:03:0c:23 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:stopped-upgrade-132802 Clientid:01:52:54:00:03:0c:23}
	I0717 22:39:57.363713   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined IP address 192.168.50.42 and MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:39:57.363835   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | Using SSH client type: external
	I0717 22:39:57.363856   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | Using SSH private key: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/stopped-upgrade-132802/id_rsa (-rw-------)
	I0717 22:39:57.363877   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.42 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16899-15759/.minikube/machines/stopped-upgrade-132802/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 22:39:57.363916   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | About to run SSH command:
	I0717 22:39:57.363933   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | exit 0
	I0717 22:39:57.489053   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | SSH cmd err, output: <nil>: 
	I0717 22:39:57.489395   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetConfigRaw
	I0717 22:39:57.490054   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetIP
	I0717 22:39:57.492483   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:39:57.492822   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:0c:23", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 23:39:47 +0000 UTC Type:0 Mac:52:54:00:03:0c:23 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:stopped-upgrade-132802 Clientid:01:52:54:00:03:0c:23}
	I0717 22:39:57.492883   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined IP address 192.168.50.42 and MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:39:57.493111   49673 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/stopped-upgrade-132802/config.json ...
	I0717 22:39:57.493309   49673 machine.go:88] provisioning docker machine ...
	I0717 22:39:57.493334   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .DriverName
	I0717 22:39:57.493569   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetMachineName
	I0717 22:39:57.493749   49673 buildroot.go:166] provisioning hostname "stopped-upgrade-132802"
	I0717 22:39:57.493769   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetMachineName
	I0717 22:39:57.493907   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHHostname
	I0717 22:39:57.496205   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:39:57.496547   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:0c:23", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 23:39:47 +0000 UTC Type:0 Mac:52:54:00:03:0c:23 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:stopped-upgrade-132802 Clientid:01:52:54:00:03:0c:23}
	I0717 22:39:57.496572   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined IP address 192.168.50.42 and MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:39:57.496710   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHPort
	I0717 22:39:57.496854   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHKeyPath
	I0717 22:39:57.497013   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHKeyPath
	I0717 22:39:57.497171   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHUsername
	I0717 22:39:57.497321   49673 main.go:141] libmachine: Using SSH client type: native
	I0717 22:39:57.497774   49673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.42 22 <nil> <nil>}
	I0717 22:39:57.497792   49673 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-132802 && echo "stopped-upgrade-132802" | sudo tee /etc/hostname
	I0717 22:39:57.616696   49673 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-132802
	
	I0717 22:39:57.616737   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHHostname
	I0717 22:39:57.619617   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:39:57.620032   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:0c:23", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 23:39:47 +0000 UTC Type:0 Mac:52:54:00:03:0c:23 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:stopped-upgrade-132802 Clientid:01:52:54:00:03:0c:23}
	I0717 22:39:57.620072   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined IP address 192.168.50.42 and MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:39:57.620242   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHPort
	I0717 22:39:57.620434   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHKeyPath
	I0717 22:39:57.620583   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHKeyPath
	I0717 22:39:57.620712   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHUsername
	I0717 22:39:57.620867   49673 main.go:141] libmachine: Using SSH client type: native
	I0717 22:39:57.621334   49673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.42 22 <nil> <nil>}
	I0717 22:39:57.621354   49673 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-132802' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-132802/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-132802' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 22:39:57.738045   49673 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:39:57.738068   49673 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16899-15759/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-15759/.minikube}
	I0717 22:39:57.738116   49673 buildroot.go:174] setting up certificates
	I0717 22:39:57.738127   49673 provision.go:83] configureAuth start
	I0717 22:39:57.738139   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetMachineName
	I0717 22:39:57.738445   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetIP
	I0717 22:39:57.741051   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:39:57.741400   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:0c:23", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 23:39:47 +0000 UTC Type:0 Mac:52:54:00:03:0c:23 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:stopped-upgrade-132802 Clientid:01:52:54:00:03:0c:23}
	I0717 22:39:57.741427   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined IP address 192.168.50.42 and MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:39:57.741599   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHHostname
	I0717 22:39:57.743885   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:39:57.744241   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:0c:23", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 23:39:47 +0000 UTC Type:0 Mac:52:54:00:03:0c:23 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:stopped-upgrade-132802 Clientid:01:52:54:00:03:0c:23}
	I0717 22:39:57.744277   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined IP address 192.168.50.42 and MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:39:57.744409   49673 provision.go:138] copyHostCerts
	I0717 22:39:57.744460   49673 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem, removing ...
	I0717 22:39:57.744470   49673 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem
	I0717 22:39:57.744554   49673 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem (1675 bytes)
	I0717 22:39:57.744692   49673 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem, removing ...
	I0717 22:39:57.744703   49673 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem
	I0717 22:39:57.744736   49673 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem (1078 bytes)
	I0717 22:39:57.744801   49673 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem, removing ...
	I0717 22:39:57.744807   49673 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem
	I0717 22:39:57.744827   49673 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem (1123 bytes)
	I0717 22:39:57.744880   49673 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-132802 san=[192.168.50.42 192.168.50.42 localhost 127.0.0.1 minikube stopped-upgrade-132802]
	I0717 22:39:57.965161   49673 provision.go:172] copyRemoteCerts
	I0717 22:39:57.965248   49673 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 22:39:57.965277   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHHostname
	I0717 22:39:57.967991   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:39:57.968390   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:0c:23", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 23:39:47 +0000 UTC Type:0 Mac:52:54:00:03:0c:23 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:stopped-upgrade-132802 Clientid:01:52:54:00:03:0c:23}
	I0717 22:39:57.968424   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined IP address 192.168.50.42 and MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:39:57.968608   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHPort
	I0717 22:39:57.968813   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHKeyPath
	I0717 22:39:57.968962   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHUsername
	I0717 22:39:57.969124   49673 sshutil.go:53] new ssh client: &{IP:192.168.50.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/stopped-upgrade-132802/id_rsa Username:docker}
	I0717 22:39:58.052542   49673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 22:39:58.066725   49673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 22:39:58.080212   49673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 22:39:58.093109   49673 provision.go:86] duration metric: configureAuth took 354.968573ms
	I0717 22:39:58.093137   49673 buildroot.go:189] setting minikube options for container-runtime
	I0717 22:39:58.093350   49673 config.go:182] Loaded profile config "stopped-upgrade-132802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0717 22:39:58.093438   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHHostname
	I0717 22:39:58.096144   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:39:58.096658   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:0c:23", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 23:39:47 +0000 UTC Type:0 Mac:52:54:00:03:0c:23 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:stopped-upgrade-132802 Clientid:01:52:54:00:03:0c:23}
	I0717 22:39:58.096683   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined IP address 192.168.50.42 and MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:39:58.097015   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHPort
	I0717 22:39:58.097252   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHKeyPath
	I0717 22:39:58.097412   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHKeyPath
	I0717 22:39:58.097584   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHUsername
	I0717 22:39:58.097782   49673 main.go:141] libmachine: Using SSH client type: native
	I0717 22:39:58.098185   49673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.42 22 <nil> <nil>}
	I0717 22:39:58.098201   49673 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 22:40:01.131388   49673 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 22:40:01.131416   49673 machine.go:91] provisioned docker machine in 3.638090237s
	I0717 22:40:01.131427   49673 start.go:300] post-start starting for "stopped-upgrade-132802" (driver="kvm2")
	I0717 22:40:01.131436   49673 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 22:40:01.131452   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .DriverName
	I0717 22:40:01.131782   49673 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 22:40:01.131819   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHHostname
	I0717 22:40:01.134519   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:40:01.134831   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:0c:23", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 23:39:47 +0000 UTC Type:0 Mac:52:54:00:03:0c:23 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:stopped-upgrade-132802 Clientid:01:52:54:00:03:0c:23}
	I0717 22:40:01.134884   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined IP address 192.168.50.42 and MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:40:01.135031   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHPort
	I0717 22:40:01.135233   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHKeyPath
	I0717 22:40:01.135401   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHUsername
	I0717 22:40:01.135525   49673 sshutil.go:53] new ssh client: &{IP:192.168.50.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/stopped-upgrade-132802/id_rsa Username:docker}
	I0717 22:40:01.220551   49673 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 22:40:01.224829   49673 info.go:137] Remote host: Buildroot 2019.02.7
	I0717 22:40:01.224857   49673 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/addons for local assets ...
	I0717 22:40:01.224936   49673 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/files for local assets ...
	I0717 22:40:01.225026   49673 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> 229902.pem in /etc/ssl/certs
	I0717 22:40:01.225143   49673 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 22:40:01.230943   49673 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:40:01.245856   49673 start.go:303] post-start completed in 114.4152ms
	I0717 22:40:01.245880   49673 fix.go:56] fixHost completed within 41.042992066s
	I0717 22:40:01.245905   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHHostname
	I0717 22:40:01.248517   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:40:01.248959   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:0c:23", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 23:39:47 +0000 UTC Type:0 Mac:52:54:00:03:0c:23 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:stopped-upgrade-132802 Clientid:01:52:54:00:03:0c:23}
	I0717 22:40:01.248992   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined IP address 192.168.50.42 and MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:40:01.249287   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHPort
	I0717 22:40:01.249513   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHKeyPath
	I0717 22:40:01.249718   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHKeyPath
	I0717 22:40:01.249887   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHUsername
	I0717 22:40:01.250073   49673 main.go:141] libmachine: Using SSH client type: native
	I0717 22:40:01.250667   49673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.42 22 <nil> <nil>}
	I0717 22:40:01.250685   49673 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 22:40:01.362180   49673 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689633601.296823337
	
	I0717 22:40:01.362207   49673 fix.go:206] guest clock: 1689633601.296823337
	I0717 22:40:01.362217   49673 fix.go:219] Guest: 2023-07-17 22:40:01.296823337 +0000 UTC Remote: 2023-07-17 22:40:01.245884863 +0000 UTC m=+68.269036917 (delta=50.938474ms)
	I0717 22:40:01.362253   49673 fix.go:190] guest clock delta is within tolerance: 50.938474ms
	I0717 22:40:01.362263   49673 start.go:83] releasing machines lock for "stopped-upgrade-132802", held for 41.159397528s
	I0717 22:40:01.362294   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .DriverName
	I0717 22:40:01.362538   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetIP
	I0717 22:40:01.365392   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:40:01.365837   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:0c:23", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 23:39:47 +0000 UTC Type:0 Mac:52:54:00:03:0c:23 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:stopped-upgrade-132802 Clientid:01:52:54:00:03:0c:23}
	I0717 22:40:01.365870   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined IP address 192.168.50.42 and MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:40:01.366031   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .DriverName
	I0717 22:40:01.366606   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .DriverName
	I0717 22:40:01.366797   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .DriverName
	I0717 22:40:01.366889   49673 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 22:40:01.366938   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHHostname
	I0717 22:40:01.366989   49673 ssh_runner.go:195] Run: cat /version.json
	I0717 22:40:01.367010   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHHostname
	I0717 22:40:01.369894   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:40:01.370146   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:40:01.370471   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:0c:23", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 23:39:47 +0000 UTC Type:0 Mac:52:54:00:03:0c:23 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:stopped-upgrade-132802 Clientid:01:52:54:00:03:0c:23}
	I0717 22:40:01.370496   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined IP address 192.168.50.42 and MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:40:01.370803   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:0c:23", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-07-17 23:39:47 +0000 UTC Type:0 Mac:52:54:00:03:0c:23 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:stopped-upgrade-132802 Clientid:01:52:54:00:03:0c:23}
	I0717 22:40:01.370828   49673 main.go:141] libmachine: (stopped-upgrade-132802) DBG | domain stopped-upgrade-132802 has defined IP address 192.168.50.42 and MAC address 52:54:00:03:0c:23 in network minikube-net
	I0717 22:40:01.370996   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHPort
	I0717 22:40:01.371141   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHPort
	I0717 22:40:01.371233   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHKeyPath
	I0717 22:40:01.371337   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHKeyPath
	I0717 22:40:01.371399   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHUsername
	I0717 22:40:01.371461   49673 main.go:141] libmachine: (stopped-upgrade-132802) Calling .GetSSHUsername
	I0717 22:40:01.371574   49673 sshutil.go:53] new ssh client: &{IP:192.168.50.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/stopped-upgrade-132802/id_rsa Username:docker}
	I0717 22:40:01.371605   49673 sshutil.go:53] new ssh client: &{IP:192.168.50.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/stopped-upgrade-132802/id_rsa Username:docker}
	W0717 22:40:01.455480   49673 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0717 22:40:01.455539   49673 ssh_runner.go:195] Run: systemctl --version
	I0717 22:40:01.473351   49673 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 22:40:01.668070   49673 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 22:40:01.674153   49673 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 22:40:01.674230   49673 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:40:01.679455   49673 cni.go:265] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 22:40:01.679482   49673 start.go:466] detecting cgroup driver to use...
	I0717 22:40:01.679568   49673 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 22:40:01.689864   49673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 22:40:01.699938   49673 docker.go:196] disabling cri-docker service (if available) ...
	I0717 22:40:01.700007   49673 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 22:40:01.709824   49673 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 22:40:01.718488   49673 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0717 22:40:01.726990   49673 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0717 22:40:01.727054   49673 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 22:40:01.813202   49673 docker.go:212] disabling docker service ...
	I0717 22:40:01.813256   49673 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 22:40:01.824255   49673 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 22:40:01.832771   49673 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 22:40:01.925623   49673 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 22:40:02.018828   49673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 22:40:02.028555   49673 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 22:40:02.042062   49673 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0717 22:40:02.042139   49673 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:40:02.052069   49673 out.go:177] 
	W0717 22:40:02.053797   49673 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0717 22:40:02.053821   49673 out.go:239] * 
	* 
	W0717 22:40:02.054900   49673 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 22:40:02.057414   49673 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:212: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-132802 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (304.18s)
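Note on the failure above: the exit comes from the pause_image update step. The guest provisioned by the old v1.6.2 binary apparently does not ship /etc/crio/crio.conf.d/02-crio.conf, so the sed invocation in the log fails with "No such file or directory" and start aborts with RUNTIME_ENABLE. Below is a minimal shell sketch of a more tolerant update, for illustration only; the fallback path /etc/crio/crio.conf is an assumption about where older ISOs keep the CRI-O config, not something this log confirms.

	# Hypothetical guarded variant of the failing command (illustrative sketch, not minikube's current behavior):
	# prefer the drop-in used by current ISOs, fall back to the monolithic config
	# if the drop-in file is absent on the older guest (assumed path).
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	[ -f "$CONF" ] || CONF=/etc/crio/crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' "$CONF"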

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (131.59s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-482945 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-482945 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (2m6.679659466s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-482945] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-15759/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-15759/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node pause-482945 in cluster pause-482945
	* Updating the running kvm2 "pause-482945" VM ...
	* Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-482945" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 22:39:56.696723   50275 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:39:56.696879   50275 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:39:56.696891   50275 out.go:309] Setting ErrFile to fd 2...
	I0717 22:39:56.696898   50275 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:39:56.697192   50275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-15759/.minikube/bin
	I0717 22:39:56.697995   50275 out.go:303] Setting JSON to false
	I0717 22:39:56.699117   50275 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8549,"bootTime":1689625048,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 22:39:56.699172   50275 start.go:138] virtualization: kvm guest
	I0717 22:39:56.701612   50275 out.go:177] * [pause-482945] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 22:39:56.703236   50275 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 22:39:56.703183   50275 notify.go:220] Checking for updates...
	I0717 22:39:56.704945   50275 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 22:39:56.706473   50275 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:39:56.707976   50275 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-15759/.minikube
	I0717 22:39:56.709555   50275 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 22:39:56.710954   50275 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 22:39:56.713876   50275 config.go:182] Loaded profile config "pause-482945": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:39:56.714296   50275 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:39:56.714353   50275 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:39:56.728949   50275 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34361
	I0717 22:39:56.729410   50275 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:39:56.730051   50275 main.go:141] libmachine: Using API Version  1
	I0717 22:39:56.730077   50275 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:39:56.730427   50275 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:39:56.730613   50275 main.go:141] libmachine: (pause-482945) Calling .DriverName
	I0717 22:39:56.730848   50275 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 22:39:56.731121   50275 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:39:56.731153   50275 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:39:56.745120   50275 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37293
	I0717 22:39:56.745566   50275 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:39:56.746035   50275 main.go:141] libmachine: Using API Version  1
	I0717 22:39:56.746056   50275 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:39:56.746375   50275 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:39:56.746542   50275 main.go:141] libmachine: (pause-482945) Calling .DriverName
	I0717 22:39:56.779748   50275 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 22:39:56.781224   50275 start.go:298] selected driver: kvm2
	I0717 22:39:56.781242   50275 start.go:880] validating driver "kvm2" against &{Name:pause-482945 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:pause-482945 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.117 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:39:56.781427   50275 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 22:39:56.781778   50275 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:39:56.781868   50275 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16899-15759/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 22:39:56.797217   50275 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.0
	I0717 22:39:56.797948   50275 cni.go:84] Creating CNI manager for ""
	I0717 22:39:56.797974   50275 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:39:56.797985   50275 start_flags.go:319] config:
	{Name:pause-482945 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:pause-482945 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.117 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:39:56.798239   50275 iso.go:125] acquiring lock: {Name:mk0fc3164b0f4222cb2c3b274841202f1f6c0fc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:39:56.801361   50275 out.go:177] * Starting control plane node pause-482945 in cluster pause-482945
	I0717 22:39:56.802840   50275 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 22:39:56.802886   50275 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0717 22:39:56.802898   50275 cache.go:57] Caching tarball of preloaded images
	I0717 22:39:56.802989   50275 preload.go:174] Found /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 22:39:56.803000   50275 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 22:39:56.803135   50275 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/pause-482945/config.json ...
	I0717 22:39:56.803346   50275 start.go:365] acquiring machines lock for pause-482945: {Name:mk8a02d78ba6485e1a7d39c2e7feff944c6a9dbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 22:40:25.002339   50275 start.go:369] acquired machines lock for "pause-482945" in 28.198960259s
	I0717 22:40:25.002390   50275 start.go:96] Skipping create...Using existing machine configuration
	I0717 22:40:25.002398   50275 fix.go:54] fixHost starting: 
	I0717 22:40:25.002794   50275 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:40:25.002844   50275 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:40:25.023197   50275 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33135
	I0717 22:40:25.023603   50275 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:40:25.024147   50275 main.go:141] libmachine: Using API Version  1
	I0717 22:40:25.024173   50275 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:40:25.024536   50275 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:40:25.024736   50275 main.go:141] libmachine: (pause-482945) Calling .DriverName
	I0717 22:40:25.024893   50275 main.go:141] libmachine: (pause-482945) Calling .GetState
	I0717 22:40:25.026581   50275 fix.go:102] recreateIfNeeded on pause-482945: state=Running err=<nil>
	W0717 22:40:25.026618   50275 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 22:40:25.028866   50275 out.go:177] * Updating the running kvm2 "pause-482945" VM ...
	I0717 22:40:25.030431   50275 machine.go:88] provisioning docker machine ...
	I0717 22:40:25.030461   50275 main.go:141] libmachine: (pause-482945) Calling .DriverName
	I0717 22:40:25.030643   50275 main.go:141] libmachine: (pause-482945) Calling .GetMachineName
	I0717 22:40:25.030796   50275 buildroot.go:166] provisioning hostname "pause-482945"
	I0717 22:40:25.030818   50275 main.go:141] libmachine: (pause-482945) Calling .GetMachineName
	I0717 22:40:25.030978   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHHostname
	I0717 22:40:25.033837   50275 main.go:141] libmachine: (pause-482945) DBG | domain pause-482945 has defined MAC address 52:54:00:b2:4f:9e in network mk-pause-482945
	I0717 22:40:25.034279   50275 main.go:141] libmachine: (pause-482945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4f:9e", ip: ""} in network mk-pause-482945: {Iface:virbr4 ExpiryTime:2023-07-17 23:39:12 +0000 UTC Type:0 Mac:52:54:00:b2:4f:9e Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:pause-482945 Clientid:01:52:54:00:b2:4f:9e}
	I0717 22:40:25.034312   50275 main.go:141] libmachine: (pause-482945) DBG | domain pause-482945 has defined IP address 192.168.61.117 and MAC address 52:54:00:b2:4f:9e in network mk-pause-482945
	I0717 22:40:25.034501   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHPort
	I0717 22:40:25.034703   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHKeyPath
	I0717 22:40:25.034892   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHKeyPath
	I0717 22:40:25.035042   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHUsername
	I0717 22:40:25.035228   50275 main.go:141] libmachine: Using SSH client type: native
	I0717 22:40:25.035885   50275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.117 22 <nil> <nil>}
	I0717 22:40:25.035911   50275 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-482945 && echo "pause-482945" | sudo tee /etc/hostname
	I0717 22:40:25.188914   50275 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-482945
	
	I0717 22:40:25.188951   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHHostname
	I0717 22:40:25.191645   50275 main.go:141] libmachine: (pause-482945) DBG | domain pause-482945 has defined MAC address 52:54:00:b2:4f:9e in network mk-pause-482945
	I0717 22:40:25.191993   50275 main.go:141] libmachine: (pause-482945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4f:9e", ip: ""} in network mk-pause-482945: {Iface:virbr4 ExpiryTime:2023-07-17 23:39:12 +0000 UTC Type:0 Mac:52:54:00:b2:4f:9e Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:pause-482945 Clientid:01:52:54:00:b2:4f:9e}
	I0717 22:40:25.192018   50275 main.go:141] libmachine: (pause-482945) DBG | domain pause-482945 has defined IP address 192.168.61.117 and MAC address 52:54:00:b2:4f:9e in network mk-pause-482945
	I0717 22:40:25.192218   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHPort
	I0717 22:40:25.192429   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHKeyPath
	I0717 22:40:25.192598   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHKeyPath
	I0717 22:40:25.192781   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHUsername
	I0717 22:40:25.192949   50275 main.go:141] libmachine: Using SSH client type: native
	I0717 22:40:25.193357   50275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.117 22 <nil> <nil>}
	I0717 22:40:25.193385   50275 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-482945' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-482945/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-482945' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 22:40:25.326738   50275 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:40:25.326770   50275 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16899-15759/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-15759/.minikube}
	I0717 22:40:25.326793   50275 buildroot.go:174] setting up certificates
	I0717 22:40:25.326804   50275 provision.go:83] configureAuth start
	I0717 22:40:25.326815   50275 main.go:141] libmachine: (pause-482945) Calling .GetMachineName
	I0717 22:40:25.327115   50275 main.go:141] libmachine: (pause-482945) Calling .GetIP
	I0717 22:40:25.329788   50275 main.go:141] libmachine: (pause-482945) DBG | domain pause-482945 has defined MAC address 52:54:00:b2:4f:9e in network mk-pause-482945
	I0717 22:40:25.330151   50275 main.go:141] libmachine: (pause-482945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4f:9e", ip: ""} in network mk-pause-482945: {Iface:virbr4 ExpiryTime:2023-07-17 23:39:12 +0000 UTC Type:0 Mac:52:54:00:b2:4f:9e Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:pause-482945 Clientid:01:52:54:00:b2:4f:9e}
	I0717 22:40:25.330189   50275 main.go:141] libmachine: (pause-482945) DBG | domain pause-482945 has defined IP address 192.168.61.117 and MAC address 52:54:00:b2:4f:9e in network mk-pause-482945
	I0717 22:40:25.330334   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHHostname
	I0717 22:40:25.332893   50275 main.go:141] libmachine: (pause-482945) DBG | domain pause-482945 has defined MAC address 52:54:00:b2:4f:9e in network mk-pause-482945
	I0717 22:40:25.333263   50275 main.go:141] libmachine: (pause-482945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4f:9e", ip: ""} in network mk-pause-482945: {Iface:virbr4 ExpiryTime:2023-07-17 23:39:12 +0000 UTC Type:0 Mac:52:54:00:b2:4f:9e Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:pause-482945 Clientid:01:52:54:00:b2:4f:9e}
	I0717 22:40:25.333305   50275 main.go:141] libmachine: (pause-482945) DBG | domain pause-482945 has defined IP address 192.168.61.117 and MAC address 52:54:00:b2:4f:9e in network mk-pause-482945
	I0717 22:40:25.333463   50275 provision.go:138] copyHostCerts
	I0717 22:40:25.333551   50275 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem, removing ...
	I0717 22:40:25.333563   50275 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem
	I0717 22:40:25.333627   50275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem (1078 bytes)
	I0717 22:40:25.333761   50275 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem, removing ...
	I0717 22:40:25.333770   50275 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem
	I0717 22:40:25.333803   50275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem (1123 bytes)
	I0717 22:40:25.333878   50275 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem, removing ...
	I0717 22:40:25.333888   50275 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem
	I0717 22:40:25.333914   50275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem (1675 bytes)
	I0717 22:40:25.333970   50275 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem org=jenkins.pause-482945 san=[192.168.61.117 192.168.61.117 localhost 127.0.0.1 minikube pause-482945]
	I0717 22:40:25.388248   50275 provision.go:172] copyRemoteCerts
	I0717 22:40:25.388350   50275 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 22:40:25.388381   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHHostname
	I0717 22:40:25.391041   50275 main.go:141] libmachine: (pause-482945) DBG | domain pause-482945 has defined MAC address 52:54:00:b2:4f:9e in network mk-pause-482945
	I0717 22:40:25.391363   50275 main.go:141] libmachine: (pause-482945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4f:9e", ip: ""} in network mk-pause-482945: {Iface:virbr4 ExpiryTime:2023-07-17 23:39:12 +0000 UTC Type:0 Mac:52:54:00:b2:4f:9e Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:pause-482945 Clientid:01:52:54:00:b2:4f:9e}
	I0717 22:40:25.391406   50275 main.go:141] libmachine: (pause-482945) DBG | domain pause-482945 has defined IP address 192.168.61.117 and MAC address 52:54:00:b2:4f:9e in network mk-pause-482945
	I0717 22:40:25.391576   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHPort
	I0717 22:40:25.391776   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHKeyPath
	I0717 22:40:25.391965   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHUsername
	I0717 22:40:25.392148   50275 sshutil.go:53] new ssh client: &{IP:192.168.61.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/pause-482945/id_rsa Username:docker}
	I0717 22:40:25.492668   50275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 22:40:25.525187   50275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0717 22:40:25.552412   50275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 22:40:25.580693   50275 provision.go:86] duration metric: configureAuth took 253.874563ms
	I0717 22:40:25.580725   50275 buildroot.go:189] setting minikube options for container-runtime
	I0717 22:40:25.581004   50275 config.go:182] Loaded profile config "pause-482945": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:40:25.581080   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHHostname
	I0717 22:40:25.583629   50275 main.go:141] libmachine: (pause-482945) DBG | domain pause-482945 has defined MAC address 52:54:00:b2:4f:9e in network mk-pause-482945
	I0717 22:40:25.584004   50275 main.go:141] libmachine: (pause-482945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4f:9e", ip: ""} in network mk-pause-482945: {Iface:virbr4 ExpiryTime:2023-07-17 23:39:12 +0000 UTC Type:0 Mac:52:54:00:b2:4f:9e Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:pause-482945 Clientid:01:52:54:00:b2:4f:9e}
	I0717 22:40:25.584037   50275 main.go:141] libmachine: (pause-482945) DBG | domain pause-482945 has defined IP address 192.168.61.117 and MAC address 52:54:00:b2:4f:9e in network mk-pause-482945
	I0717 22:40:25.584240   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHPort
	I0717 22:40:25.584435   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHKeyPath
	I0717 22:40:25.584606   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHKeyPath
	I0717 22:40:25.584748   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHUsername
	I0717 22:40:25.584953   50275 main.go:141] libmachine: Using SSH client type: native
	I0717 22:40:25.585331   50275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.117 22 <nil> <nil>}
	I0717 22:40:25.585347   50275 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 22:40:33.277060   50275 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 22:40:33.277085   50275 machine.go:91] provisioned docker machine in 8.246635262s
	I0717 22:40:33.277094   50275 start.go:300] post-start starting for "pause-482945" (driver="kvm2")
	I0717 22:40:33.277101   50275 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 22:40:33.277137   50275 main.go:141] libmachine: (pause-482945) Calling .DriverName
	I0717 22:40:33.277589   50275 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 22:40:33.277617   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHHostname
	I0717 22:40:33.280193   50275 main.go:141] libmachine: (pause-482945) DBG | domain pause-482945 has defined MAC address 52:54:00:b2:4f:9e in network mk-pause-482945
	I0717 22:40:33.280604   50275 main.go:141] libmachine: (pause-482945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4f:9e", ip: ""} in network mk-pause-482945: {Iface:virbr4 ExpiryTime:2023-07-17 23:39:12 +0000 UTC Type:0 Mac:52:54:00:b2:4f:9e Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:pause-482945 Clientid:01:52:54:00:b2:4f:9e}
	I0717 22:40:33.280633   50275 main.go:141] libmachine: (pause-482945) DBG | domain pause-482945 has defined IP address 192.168.61.117 and MAC address 52:54:00:b2:4f:9e in network mk-pause-482945
	I0717 22:40:33.280820   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHPort
	I0717 22:40:33.280996   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHKeyPath
	I0717 22:40:33.281160   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHUsername
	I0717 22:40:33.281313   50275 sshutil.go:53] new ssh client: &{IP:192.168.61.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/pause-482945/id_rsa Username:docker}
	I0717 22:40:33.982444   50275 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 22:40:33.993326   50275 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 22:40:33.993356   50275 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/addons for local assets ...
	I0717 22:40:33.993446   50275 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/files for local assets ...
	I0717 22:40:33.993575   50275 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> 229902.pem in /etc/ssl/certs
	I0717 22:40:33.993700   50275 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 22:40:34.010609   50275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:40:34.064861   50275 start.go:303] post-start completed in 787.755686ms
	I0717 22:40:34.064884   50275 fix.go:56] fixHost completed within 9.062486941s
	I0717 22:40:34.064911   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHHostname
	I0717 22:40:34.067872   50275 main.go:141] libmachine: (pause-482945) DBG | domain pause-482945 has defined MAC address 52:54:00:b2:4f:9e in network mk-pause-482945
	I0717 22:40:34.068294   50275 main.go:141] libmachine: (pause-482945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4f:9e", ip: ""} in network mk-pause-482945: {Iface:virbr4 ExpiryTime:2023-07-17 23:39:12 +0000 UTC Type:0 Mac:52:54:00:b2:4f:9e Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:pause-482945 Clientid:01:52:54:00:b2:4f:9e}
	I0717 22:40:34.068335   50275 main.go:141] libmachine: (pause-482945) DBG | domain pause-482945 has defined IP address 192.168.61.117 and MAC address 52:54:00:b2:4f:9e in network mk-pause-482945
	I0717 22:40:34.068488   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHPort
	I0717 22:40:34.068721   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHKeyPath
	I0717 22:40:34.068913   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHKeyPath
	I0717 22:40:34.069046   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHUsername
	I0717 22:40:34.069231   50275 main.go:141] libmachine: Using SSH client type: native
	I0717 22:40:34.069854   50275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.117 22 <nil> <nil>}
	I0717 22:40:34.069874   50275 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 22:40:34.240178   50275 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689633634.236423912
	
	I0717 22:40:34.240202   50275 fix.go:206] guest clock: 1689633634.236423912
	I0717 22:40:34.240213   50275 fix.go:219] Guest: 2023-07-17 22:40:34.236423912 +0000 UTC Remote: 2023-07-17 22:40:34.064890954 +0000 UTC m=+37.403751474 (delta=171.532958ms)
	I0717 22:40:34.240256   50275 fix.go:190] guest clock delta is within tolerance: 171.532958ms
	I0717 22:40:34.240266   50275 start.go:83] releasing machines lock for "pause-482945", held for 9.237896485s
	I0717 22:40:34.240299   50275 main.go:141] libmachine: (pause-482945) Calling .DriverName
	I0717 22:40:34.240573   50275 main.go:141] libmachine: (pause-482945) Calling .GetIP
	I0717 22:40:34.243767   50275 main.go:141] libmachine: (pause-482945) DBG | domain pause-482945 has defined MAC address 52:54:00:b2:4f:9e in network mk-pause-482945
	I0717 22:40:34.244129   50275 main.go:141] libmachine: (pause-482945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4f:9e", ip: ""} in network mk-pause-482945: {Iface:virbr4 ExpiryTime:2023-07-17 23:39:12 +0000 UTC Type:0 Mac:52:54:00:b2:4f:9e Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:pause-482945 Clientid:01:52:54:00:b2:4f:9e}
	I0717 22:40:34.244182   50275 main.go:141] libmachine: (pause-482945) DBG | domain pause-482945 has defined IP address 192.168.61.117 and MAC address 52:54:00:b2:4f:9e in network mk-pause-482945
	I0717 22:40:34.244302   50275 main.go:141] libmachine: (pause-482945) Calling .DriverName
	I0717 22:40:34.244827   50275 main.go:141] libmachine: (pause-482945) Calling .DriverName
	I0717 22:40:34.244995   50275 main.go:141] libmachine: (pause-482945) Calling .DriverName
	I0717 22:40:34.245087   50275 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 22:40:34.245135   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHHostname
	I0717 22:40:34.245210   50275 ssh_runner.go:195] Run: cat /version.json
	I0717 22:40:34.245236   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHHostname
	I0717 22:40:34.248451   50275 main.go:141] libmachine: (pause-482945) DBG | domain pause-482945 has defined MAC address 52:54:00:b2:4f:9e in network mk-pause-482945
	I0717 22:40:34.249046   50275 main.go:141] libmachine: (pause-482945) DBG | domain pause-482945 has defined MAC address 52:54:00:b2:4f:9e in network mk-pause-482945
	I0717 22:40:34.249212   50275 main.go:141] libmachine: (pause-482945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4f:9e", ip: ""} in network mk-pause-482945: {Iface:virbr4 ExpiryTime:2023-07-17 23:39:12 +0000 UTC Type:0 Mac:52:54:00:b2:4f:9e Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:pause-482945 Clientid:01:52:54:00:b2:4f:9e}
	I0717 22:40:34.249242   50275 main.go:141] libmachine: (pause-482945) DBG | domain pause-482945 has defined IP address 192.168.61.117 and MAC address 52:54:00:b2:4f:9e in network mk-pause-482945
	I0717 22:40:34.249536   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHPort
	I0717 22:40:34.249725   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHKeyPath
	I0717 22:40:34.249788   50275 main.go:141] libmachine: (pause-482945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4f:9e", ip: ""} in network mk-pause-482945: {Iface:virbr4 ExpiryTime:2023-07-17 23:39:12 +0000 UTC Type:0 Mac:52:54:00:b2:4f:9e Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:pause-482945 Clientid:01:52:54:00:b2:4f:9e}
	I0717 22:40:34.249808   50275 main.go:141] libmachine: (pause-482945) DBG | domain pause-482945 has defined IP address 192.168.61.117 and MAC address 52:54:00:b2:4f:9e in network mk-pause-482945
	I0717 22:40:34.249933   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHUsername
	I0717 22:40:34.250070   50275 sshutil.go:53] new ssh client: &{IP:192.168.61.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/pause-482945/id_rsa Username:docker}
	I0717 22:40:34.250797   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHPort
	I0717 22:40:34.250978   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHKeyPath
	I0717 22:40:34.251108   50275 main.go:141] libmachine: (pause-482945) Calling .GetSSHUsername
	I0717 22:40:34.251218   50275 sshutil.go:53] new ssh client: &{IP:192.168.61.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/pause-482945/id_rsa Username:docker}
	I0717 22:40:34.369798   50275 ssh_runner.go:195] Run: systemctl --version
	I0717 22:40:34.405439   50275 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 22:40:34.582477   50275 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 22:40:34.594873   50275 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 22:40:34.594946   50275 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:40:34.619218   50275 cni.go:265] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 22:40:34.619269   50275 start.go:466] detecting cgroup driver to use...
	I0717 22:40:34.619338   50275 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 22:40:34.643469   50275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 22:40:34.667194   50275 docker.go:196] disabling cri-docker service (if available) ...
	I0717 22:40:34.667262   50275 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 22:40:34.691848   50275 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 22:40:34.725318   50275 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 22:40:35.177246   50275 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 22:40:35.500275   50275 docker.go:212] disabling docker service ...
	I0717 22:40:35.500345   50275 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 22:40:35.543208   50275 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 22:40:35.575762   50275 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 22:40:35.913353   50275 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 22:40:36.275773   50275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 22:40:36.316136   50275 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 22:40:36.363857   50275 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 22:40:36.363945   50275 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:40:36.401828   50275 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 22:40:36.401916   50275 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:40:36.431064   50275 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:40:36.461387   50275 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:40:36.487283   50275 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 22:40:36.513123   50275 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 22:40:36.536457   50275 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 22:40:36.558814   50275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 22:40:36.915163   50275 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 22:40:38.348885   50275 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.433685023s)
	I0717 22:40:38.348913   50275 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 22:40:38.348965   50275 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 22:40:38.355447   50275 start.go:534] Will wait 60s for crictl version
	I0717 22:40:38.355507   50275 ssh_runner.go:195] Run: which crictl
	I0717 22:40:38.360562   50275 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 22:40:38.399184   50275 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 22:40:38.399290   50275 ssh_runner.go:195] Run: crio --version
	I0717 22:40:38.905255   50275 ssh_runner.go:195] Run: crio --version
	I0717 22:40:39.223195   50275 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 22:40:39.224927   50275 main.go:141] libmachine: (pause-482945) Calling .GetIP
	I0717 22:40:39.228403   50275 main.go:141] libmachine: (pause-482945) DBG | domain pause-482945 has defined MAC address 52:54:00:b2:4f:9e in network mk-pause-482945
	I0717 22:40:39.229008   50275 main.go:141] libmachine: (pause-482945) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:4f:9e", ip: ""} in network mk-pause-482945: {Iface:virbr4 ExpiryTime:2023-07-17 23:39:12 +0000 UTC Type:0 Mac:52:54:00:b2:4f:9e Iaid: IPaddr:192.168.61.117 Prefix:24 Hostname:pause-482945 Clientid:01:52:54:00:b2:4f:9e}
	I0717 22:40:39.229036   50275 main.go:141] libmachine: (pause-482945) DBG | domain pause-482945 has defined IP address 192.168.61.117 and MAC address 52:54:00:b2:4f:9e in network mk-pause-482945
	I0717 22:40:39.229379   50275 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0717 22:40:39.250566   50275 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 22:40:39.250641   50275 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:40:39.360059   50275 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 22:40:39.360086   50275 crio.go:415] Images already preloaded, skipping extraction
	I0717 22:40:39.360144   50275 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:40:39.431999   50275 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 22:40:39.432028   50275 cache_images.go:84] Images are preloaded, skipping loading
	I0717 22:40:39.432115   50275 ssh_runner.go:195] Run: crio config
	I0717 22:40:39.591943   50275 cni.go:84] Creating CNI manager for ""
	I0717 22:40:39.592019   50275 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:40:39.592045   50275 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 22:40:39.592076   50275 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.117 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-482945 NodeName:pause-482945 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.117"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.117 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 22:40:39.592293   50275 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.117
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-482945"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.117
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.117"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 22:40:39.592393   50275 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=pause-482945 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.117
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:pause-482945 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 22:40:39.592474   50275 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 22:40:39.608851   50275 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 22:40:39.608973   50275 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 22:40:39.631163   50275 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I0717 22:40:39.660523   50275 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 22:40:39.694935   50275 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0717 22:40:39.717646   50275 ssh_runner.go:195] Run: grep 192.168.61.117	control-plane.minikube.internal$ /etc/hosts
	I0717 22:40:39.730336   50275 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/pause-482945 for IP: 192.168.61.117
	I0717 22:40:39.730369   50275 certs.go:190] acquiring lock for shared ca certs: {Name:mk358cdd8ffcf2f8ada4337c2bb687d932ef5afe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:40:39.730555   50275 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key
	I0717 22:40:39.730619   50275 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key
	I0717 22:40:39.730715   50275 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/pause-482945/client.key
	I0717 22:40:39.730815   50275 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/pause-482945/apiserver.key.1784b7dd
	I0717 22:40:39.730873   50275 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/pause-482945/proxy-client.key
	I0717 22:40:39.731021   50275 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem (1338 bytes)
	W0717 22:40:39.731064   50275 certs.go:433] ignoring /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990_empty.pem, impossibly tiny 0 bytes
	I0717 22:40:39.731080   50275 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 22:40:39.731115   50275 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem (1078 bytes)
	I0717 22:40:39.731144   50275 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem (1123 bytes)
	I0717 22:40:39.731171   50275 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem (1675 bytes)
	I0717 22:40:39.731230   50275 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:40:39.732561   50275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/pause-482945/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 22:40:39.800025   50275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/pause-482945/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 22:40:39.845673   50275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/pause-482945/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 22:40:39.877670   50275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/pause-482945/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 22:40:39.915446   50275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 22:40:39.953612   50275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 22:40:39.998418   50275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 22:40:40.061851   50275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 22:40:40.127102   50275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /usr/share/ca-certificates/229902.pem (1708 bytes)
	I0717 22:40:40.176557   50275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 22:40:40.223667   50275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem --> /usr/share/ca-certificates/22990.pem (1338 bytes)
	I0717 22:40:40.262484   50275 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 22:40:40.283282   50275 ssh_runner.go:195] Run: openssl version
	I0717 22:40:40.291484   50275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/229902.pem && ln -fs /usr/share/ca-certificates/229902.pem /etc/ssl/certs/229902.pem"
	I0717 22:40:40.306568   50275 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/229902.pem
	I0717 22:40:40.313993   50275 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 21:49 /usr/share/ca-certificates/229902.pem
	I0717 22:40:40.314056   50275 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/229902.pem
	I0717 22:40:40.322928   50275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/229902.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 22:40:40.336643   50275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 22:40:40.350574   50275 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:40:40.357123   50275 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:40:40.357204   50275 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:40:40.364986   50275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 22:40:40.376179   50275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22990.pem && ln -fs /usr/share/ca-certificates/22990.pem /etc/ssl/certs/22990.pem"
	I0717 22:40:40.391352   50275 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22990.pem
	I0717 22:40:40.397223   50275 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 21:49 /usr/share/ca-certificates/22990.pem
	I0717 22:40:40.397290   50275 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22990.pem
	I0717 22:40:40.406431   50275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22990.pem /etc/ssl/certs/51391683.0"
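(Editor's note) The sequence above installs each CA bundle under /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject hash (`openssl x509 -hash -noout` followed by `ln -fs`). A rough Go equivalent of one iteration that shells out to the same commands; the paths come from the log, error handling is trimmed, and in the real run everything executes over SSH with sudo.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // linkByHash mirrors the logged sequence: compute the OpenSSL subject hash
    // of a PEM certificate and symlink it into /etc/ssl/certs as <hash>.0.
    func linkByHash(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        // ln -fs <pemPath> <link>, as in the log.
        return exec.Command("ln", "-fs", pemPath, link).Run()
    }

    func main() {
        if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println(err)
        }
    }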
	I0717 22:40:40.418007   50275 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 22:40:40.423094   50275 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 22:40:40.431572   50275 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 22:40:40.440423   50275 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 22:40:40.449043   50275 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 22:40:40.458571   50275 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 22:40:40.466633   50275 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
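(Editor's note) Each `openssl x509 -checkend 86400` call above asks whether the certificate will still be valid 24 hours from now; a non-zero exit means it expires inside the window. The same check can be done natively with crypto/x509. A small sketch, using one cert path from the log and the window as a parameter.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within
    // the given window, the crypto/x509 analogue of `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }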
	I0717 22:40:40.473860   50275 kubeadm.go:404] StartCluster: {Name:pause-482945 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:pause-482945 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.117 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:40:40.474007   50275 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 22:40:40.474060   50275 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:40:40.520640   50275 cri.go:89] found id: "8d8373a804c69ba8d1619648497e02da7e9871a87beae160c2ce8150e37fc0ea"
	I0717 22:40:40.520665   50275 cri.go:89] found id: "804095c977b65cdd3d5332ddf537d3fac5b77d711c0c41afd970be7cdcbc6c7e"
	I0717 22:40:40.520672   50275 cri.go:89] found id: "c1ca00541c56d22fe889137e6dae933b737e0deb8f00b9e9e0c8eba1b70894c7"
	I0717 22:40:40.520678   50275 cri.go:89] found id: "637ae17833f746b905ca21ca70cf68ecfe1402d60bd40c5fd416ebbe5f570dea"
	I0717 22:40:40.520683   50275 cri.go:89] found id: "31c52fea48ca64a705b77ee0a8d818e4b1b4fc0ddaeeec78980546fc221c4c0c"
	I0717 22:40:40.520689   50275 cri.go:89] found id: "81767a650c3b1a43889dcb478913a2b1105379f2e570f1e502dd7f04c25eba8c"
	I0717 22:40:40.520694   50275 cri.go:89] found id: "edba05cf8cdb48e1378d7136230116cb21be62c757ccfb4cd997d2dd68ff976e"
	I0717 22:40:40.520700   50275 cri.go:89] found id: ""
	I0717 22:40:40.520752   50275 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
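(Editor's note) The stderr capture above ends while enumerating kube-system containers with `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` (the "found id" lines). A minimal sketch of driving that same crictl invocation from Go and splitting the output into container IDs; it runs locally rather than through minikube's SSH runner, and the helper name is made up.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listKubeSystemContainers runs the crictl invocation seen in the log and
    // returns the container IDs, one per output line.
    func listKubeSystemContainers() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        ids, err := listKubeSystemContainers()
        if err != nil {
            panic(err)
        }
        fmt.Printf("found %d kube-system containers\n", len(ids))
    }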
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-482945 -n pause-482945
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-482945 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-482945 logs -n 25: (1.924381849s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-env-939164           | force-systemd-env-939164  | jenkins | v1.31.0 | 17 Jul 23 22:37 UTC | 17 Jul 23 22:38 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-730116             | running-upgrade-730116    | jenkins | v1.31.0 | 17 Jul 23 22:37 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-201894 ssh cat     | force-systemd-flag-201894 | jenkins | v1.31.0 | 17 Jul 23 22:38 UTC | 17 Jul 23 22:38 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-201894          | force-systemd-flag-201894 | jenkins | v1.31.0 | 17 Jul 23 22:38 UTC | 17 Jul 23 22:38 UTC |
	| start   | -p cert-expiration-366864             | cert-expiration-366864    | jenkins | v1.31.0 | 17 Jul 23 22:38 UTC | 17 Jul 23 22:38 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-730116             | running-upgrade-730116    | jenkins | v1.31.0 | 17 Jul 23 22:38 UTC | 17 Jul 23 22:38 UTC |
	| start   | -p cert-options-259016                | cert-options-259016       | jenkins | v1.31.0 | 17 Jul 23 22:38 UTC | 17 Jul 23 22:39 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-939164           | force-systemd-env-939164  | jenkins | v1.31.0 | 17 Jul 23 22:38 UTC | 17 Jul 23 22:38 UTC |
	| start   | -p pause-482945 --memory=2048         | pause-482945              | jenkins | v1.31.0 | 17 Jul 23 22:38 UTC | 17 Jul 23 22:39 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-132802             | stopped-upgrade-132802    | jenkins | v1.31.0 | 17 Jul 23 22:38 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-259016 ssh               | cert-options-259016       | jenkins | v1.31.0 | 17 Jul 23 22:39 UTC | 17 Jul 23 22:39 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-259016 -- sudo        | cert-options-259016       | jenkins | v1.31.0 | 17 Jul 23 22:39 UTC | 17 Jul 23 22:39 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-259016                | cert-options-259016       | jenkins | v1.31.0 | 17 Jul 23 22:39 UTC | 17 Jul 23 22:39 UTC |
	| start   | -p NoKubernetes-431736                | NoKubernetes-431736       | jenkins | v1.31.0 | 17 Jul 23 22:39 UTC |                     |
	|         | --no-kubernetes                       |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20             |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-431736                | NoKubernetes-431736       | jenkins | v1.31.0 | 17 Jul 23 22:39 UTC | 17 Jul 23 22:40 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-482945                       | pause-482945              | jenkins | v1.31.0 | 17 Jul 23 22:39 UTC | 17 Jul 23 22:42 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-132802             | stopped-upgrade-132802    | jenkins | v1.31.0 | 17 Jul 23 22:40 UTC | 17 Jul 23 22:40 UTC |
	| start   | -p old-k8s-version-332820             | old-k8s-version-332820    | jenkins | v1.31.0 | 17 Jul 23 22:40 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --kvm-network=default                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts               |                           |         |         |                     |                     |
	|         | --keep-context=false                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0          |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-431736                | NoKubernetes-431736       | jenkins | v1.31.0 | 17 Jul 23 22:40 UTC | 17 Jul 23 22:41 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-431736                | NoKubernetes-431736       | jenkins | v1.31.0 | 17 Jul 23 22:41 UTC | 17 Jul 23 22:41 UTC |
	| start   | -p NoKubernetes-431736                | NoKubernetes-431736       | jenkins | v1.31.0 | 17 Jul 23 22:41 UTC | 17 Jul 23 22:41 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-431736 sudo           | NoKubernetes-431736       | jenkins | v1.31.0 | 17 Jul 23 22:41 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| start   | -p cert-expiration-366864             | cert-expiration-366864    | jenkins | v1.31.0 | 17 Jul 23 22:41 UTC |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-431736                | NoKubernetes-431736       | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:42 UTC |
	| start   | -p NoKubernetes-431736                | NoKubernetes-431736       | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 22:42:02
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 22:42:02.325737   51650 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:42:02.325848   51650 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:42:02.325851   51650 out.go:309] Setting ErrFile to fd 2...
	I0717 22:42:02.325855   51650 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:42:02.326099   51650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-15759/.minikube/bin
	I0717 22:42:02.326632   51650 out.go:303] Setting JSON to false
	I0717 22:42:02.327611   51650 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8674,"bootTime":1689625048,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 22:42:02.327664   51650 start.go:138] virtualization: kvm guest
	I0717 22:42:02.330082   51650 out.go:177] * [NoKubernetes-431736] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 22:42:02.331823   51650 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 22:42:02.331821   51650 notify.go:220] Checking for updates...
	I0717 22:42:02.333449   51650 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 22:42:02.334993   51650 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:42:02.336477   51650 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-15759/.minikube
	I0717 22:42:02.338936   51650 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 22:42:02.340664   51650 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 22:42:02.342898   51650 config.go:182] Loaded profile config "NoKubernetes-431736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0717 22:42:02.343373   51650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:42:02.343445   51650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:42:02.360168   51650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42425
	I0717 22:42:02.360581   51650 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:42:02.361139   51650 main.go:141] libmachine: Using API Version  1
	I0717 22:42:02.361154   51650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:42:02.361602   51650 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:42:02.361792   51650 main.go:141] libmachine: (NoKubernetes-431736) Calling .DriverName
	I0717 22:42:02.362030   51650 start.go:1698] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I0717 22:42:02.362054   51650 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 22:42:02.362318   51650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:42:02.362344   51650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:42:02.377001   51650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35995
	I0717 22:42:02.377606   51650 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:42:02.378193   51650 main.go:141] libmachine: Using API Version  1
	I0717 22:42:02.378211   51650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:42:02.378508   51650 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:42:02.378701   51650 main.go:141] libmachine: (NoKubernetes-431736) Calling .DriverName
	I0717 22:42:02.423178   51650 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 22:42:02.424675   51650 start.go:298] selected driver: kvm2
	I0717 22:42:02.424684   51650 start.go:880] validating driver "kvm2" against &{Name:NoKubernetes-431736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-431736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:42:02.424810   51650 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 22:42:02.425262   51650 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:42:02.425367   51650 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16899-15759/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 22:42:02.440851   51650 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.0
	I0717 22:42:02.441854   51650 cni.go:84] Creating CNI manager for ""
	I0717 22:42:02.441870   51650 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:42:02.441881   51650 start_flags.go:319] config:
	{Name:NoKubernetes-431736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-431736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:42:02.442073   51650 iso.go:125] acquiring lock: {Name:mk0fc3164b0f4222cb2c3b274841202f1f6c0fc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:42:02.444062   51650 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-431736
	I0717 22:42:02.445581   51650 preload.go:132] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W0717 22:42:02.475960   51650 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
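(Editor's note) The 404 above is how minikube discovers that no preload tarball exists for the placeholder version v0.0.0, so it falls back to starting without one. A rough sketch of that existence probe: issue a HEAD request against the preload URL and treat anything other than 200 as "no preload". The URL is taken from the log; the helper name is made up and this is not minikube's actual preload code.

    package main

    import (
        "fmt"
        "net/http"
    )

    // preloadExists probes the preload tarball URL; a non-200 status
    // (the 404 in the log above) means no preload is available.
    func preloadExists(url string) (bool, error) {
        resp, err := http.Head(url)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        return resp.StatusCode == http.StatusOK, nil
    }

    func main() {
        url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4"
        ok, err := preloadExists(url)
        if err != nil {
            panic(err)
        }
        fmt.Println("preload available:", ok)
    }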
	I0717 22:42:02.476133   51650 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/NoKubernetes-431736/config.json ...
	I0717 22:42:02.476451   51650 start.go:365] acquiring machines lock for NoKubernetes-431736: {Name:mk8a02d78ba6485e1a7d39c2e7feff944c6a9dbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 22:42:02.476524   51650 start.go:369] acquired machines lock for "NoKubernetes-431736" in 55.288µs
	I0717 22:42:02.476538   51650 start.go:96] Skipping create...Using existing machine configuration
	I0717 22:42:02.476543   51650 fix.go:54] fixHost starting: 
	I0717 22:42:02.476948   51650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:42:02.476985   51650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:42:02.492319   51650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45869
	I0717 22:42:02.492711   51650 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:42:02.493175   51650 main.go:141] libmachine: Using API Version  1
	I0717 22:42:02.493187   51650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:42:02.493456   51650 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:42:02.493661   51650 main.go:141] libmachine: (NoKubernetes-431736) Calling .DriverName
	I0717 22:42:02.493814   51650 main.go:141] libmachine: (NoKubernetes-431736) Calling .GetState
	I0717 22:42:02.495567   51650 fix.go:102] recreateIfNeeded on NoKubernetes-431736: state=Stopped err=<nil>
	I0717 22:42:02.495600   51650 main.go:141] libmachine: (NoKubernetes-431736) Calling .DriverName
	W0717 22:42:02.495775   51650 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 22:42:02.497834   51650 out.go:177] * Restarting existing kvm2 VM for "NoKubernetes-431736" ...
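(Editor's note) "Restarting existing kvm2 VM" means the driver boots the already-defined libvirt domain rather than creating a new one. The kvm2 driver talks to libvirt through its API, so the following is only a rough command-line equivalent wrapped in Go, not the driver's actual code path; the domain name is taken from the log and the connection URI matches the KVMQemuURI seen in the config dumps.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // startDomain boots an already-defined libvirt domain, roughly what
    // "Restarting existing kvm2 VM" amounts to for this profile.
    func startDomain(domain string) error {
        out, err := exec.Command("virsh", "--connect", "qemu:///system", "start", domain).CombinedOutput()
        if err != nil {
            return fmt.Errorf("virsh start %s: %v: %s", domain, err, out)
        }
        return nil
    }

    func main() {
        if err := startDomain("NoKubernetes-431736"); err != nil {
            fmt.Println(err)
        }
    }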
	I0717 22:42:01.475621   51523 main.go:141] libmachine: (cert-expiration-366864) Calling .GetIP
	I0717 22:42:01.478407   51523 main.go:141] libmachine: (cert-expiration-366864) DBG | domain cert-expiration-366864 has defined MAC address 52:54:00:da:15:f3 in network mk-cert-expiration-366864
	I0717 22:42:01.478862   51523 main.go:141] libmachine: (cert-expiration-366864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:15:f3", ip: ""} in network mk-cert-expiration-366864: {Iface:virbr1 ExpiryTime:2023-07-17 23:38:22 +0000 UTC Type:0 Mac:52:54:00:da:15:f3 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:cert-expiration-366864 Clientid:01:52:54:00:da:15:f3}
	I0717 22:42:01.478886   51523 main.go:141] libmachine: (cert-expiration-366864) DBG | domain cert-expiration-366864 has defined IP address 192.168.72.23 and MAC address 52:54:00:da:15:f3 in network mk-cert-expiration-366864
	I0717 22:42:01.479092   51523 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 22:42:01.484163   51523 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 22:42:01.484247   51523 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:42:01.522450   51523 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 22:42:01.522461   51523 crio.go:415] Images already preloaded, skipping extraction
	I0717 22:42:01.522516   51523 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:42:01.556555   51523 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 22:42:01.556569   51523 cache_images.go:84] Images are preloaded, skipping loading
	I0717 22:42:01.556649   51523 ssh_runner.go:195] Run: crio config
	I0717 22:42:01.637104   51523 cni.go:84] Creating CNI manager for ""
	I0717 22:42:01.637126   51523 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:42:01.637138   51523 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 22:42:01.637159   51523 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.23 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-366864 NodeName:cert-expiration-366864 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.23"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.23 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 22:42:01.637342   51523 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.23
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-366864"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.23
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.23"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 22:42:01.637411   51523 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=cert-expiration-366864 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.23
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:cert-expiration-366864 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 22:42:01.637460   51523 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 22:42:01.647139   51523 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 22:42:01.647219   51523 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 22:42:01.656979   51523 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0717 22:42:01.675385   51523 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 22:42:01.692333   51523 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0717 22:42:01.709832   51523 ssh_runner.go:195] Run: grep 192.168.72.23	control-plane.minikube.internal$ /etc/hosts
	I0717 22:42:01.714303   51523 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864 for IP: 192.168.72.23
	I0717 22:42:01.714329   51523 certs.go:190] acquiring lock for shared ca certs: {Name:mk358cdd8ffcf2f8ada4337c2bb687d932ef5afe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:42:01.714519   51523 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key
	I0717 22:42:01.714572   51523 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key
	I0717 22:42:01.714699   51523 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/client.key
	W0717 22:42:01.714851   51523 out.go:239] ! Certificate apiserver.crt.a30a8404 has expired. Generating a new one...
	I0717 22:42:01.714879   51523 certs.go:576] cert expired /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/apiserver.crt.a30a8404: expiration: 2023-07-17 22:41:37 +0000 UTC, now: 2023-07-17 22:42:01.714873961 +0000 UTC m=+8.601907876
	I0717 22:42:01.715009   51523 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/apiserver.key.a30a8404
	I0717 22:42:01.715036   51523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/apiserver.crt.a30a8404 with IP's: [192.168.72.23 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 22:42:02.033200   51523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/apiserver.crt.a30a8404 ...
	I0717 22:42:02.033214   51523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/apiserver.crt.a30a8404: {Name:mk8486f495aaa1ce6b522ea4a96e31af79ee387c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:42:02.033370   51523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/apiserver.key.a30a8404 ...
	I0717 22:42:02.033380   51523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/apiserver.key.a30a8404: {Name:mk0b56464450e04f557ce8fc512d6f97569baa87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:42:02.033461   51523 certs.go:337] copying /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/apiserver.crt.a30a8404 -> /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/apiserver.crt
	I0717 22:42:02.033613   51523 certs.go:341] copying /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/apiserver.key.a30a8404 -> /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/apiserver.key
	W0717 22:42:02.033771   51523 out.go:239] ! Certificate proxy-client.crt has expired. Generating a new one...
	I0717 22:42:02.033786   51523 certs.go:576] cert expired /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/proxy-client.crt: expiration: 2023-07-17 22:41:37 +0000 UTC, now: 2023-07-17 22:42:02.033783075 +0000 UTC m=+8.920816986
	I0717 22:42:02.033833   51523 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/proxy-client.key
	I0717 22:42:02.033843   51523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/proxy-client.crt with IP's: []
	I0717 22:42:02.094751   51523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/proxy-client.crt ...
	I0717 22:42:02.094766   51523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/proxy-client.crt: {Name:mk751c6ad37ddd4609934e56ab7244c6ca5c8456 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:42:02.094882   51523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/proxy-client.key ...
	I0717 22:42:02.094887   51523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/proxy-client.key: {Name:mk9ec81c036c5942501cf1fa4a1b2918f0b99eaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
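(Editor's note) When the test's deliberately short-lived certificates expire (the warnings above), minikube issues replacements: a new API server key and certificate signed by the minikubeCA with the node and service IPs as SANs, then a fresh proxy-client certificate. A compact sketch of issuing such an IP-SAN serving certificate with crypto/x509; the throwaway CA, the one-year validity, and the subject are illustrative, and errors in main are elided for brevity. The IP list matches the cert-expiration-366864 log line above.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    // issueServingCert signs a new API-server-style certificate with the given
    // CA, embedding the requested IPs as subject alternative names.
    func issueServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, []byte, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour), // validity is illustrative
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  ips,
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
        return certPEM, keyPEM, nil
    }

    func main() {
        // Throwaway CA standing in for minikubeCA; illustration only.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        certPEM, keyPEM, err := issueServingCert(caCert, caKey,
            []net.IP{net.ParseIP("192.168.72.23"), net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1")})
        if err != nil {
            panic(err)
        }
        os.WriteFile("apiserver.crt", certPEM, 0o644)
        os.WriteFile("apiserver.key", keyPEM, 0o600)
    }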
	I0717 22:42:02.095032   51523 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem (1338 bytes)
	W0717 22:42:02.095059   51523 certs.go:433] ignoring /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990_empty.pem, impossibly tiny 0 bytes
	I0717 22:42:02.095070   51523 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 22:42:02.095089   51523 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem (1078 bytes)
	I0717 22:42:02.095106   51523 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem (1123 bytes)
	I0717 22:42:02.095126   51523 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem (1675 bytes)
	I0717 22:42:02.095168   51523 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:42:02.095711   51523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 22:42:02.216228   51523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 22:42:02.342939   51523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 22:42:02.424238   51523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 22:42:02.460166   51523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 22:42:02.529598   51523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 22:42:02.589779   51523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 22:42:02.633835   51523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 22:42:02.689499   51523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 22:42:02.724279   51523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem --> /usr/share/ca-certificates/22990.pem (1338 bytes)
	I0717 22:42:02.766029   51523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /usr/share/ca-certificates/229902.pem (1708 bytes)
	I0717 22:42:02.823507   51523 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 22:42:02.862211   51523 ssh_runner.go:195] Run: openssl version
	I0717 22:42:02.879785   51523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/229902.pem && ln -fs /usr/share/ca-certificates/229902.pem /etc/ssl/certs/229902.pem"
	I0717 22:42:02.906219   51523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/229902.pem
	I0717 22:42:02.917280   51523 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 21:49 /usr/share/ca-certificates/229902.pem
	I0717 22:42:02.917349   51523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/229902.pem
	I0717 22:42:02.928766   51523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/229902.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 22:42:02.943680   51523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 22:42:02.960916   51523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:42:02.969263   51523 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:42:02.969314   51523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:42:02.979117   51523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 22:42:02.993085   51523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22990.pem && ln -fs /usr/share/ca-certificates/22990.pem /etc/ssl/certs/22990.pem"
	I0717 22:42:03.008967   51523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22990.pem
	I0717 22:42:03.018795   51523 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 21:49 /usr/share/ca-certificates/22990.pem
	I0717 22:42:03.018839   51523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22990.pem
	I0717 22:42:03.028111   51523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22990.pem /etc/ssl/certs/51391683.0"
	I0717 22:42:03.042092   51523 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 22:42:03.050335   51523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 22:42:03.060778   51523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 22:42:03.068781   51523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 22:42:03.077094   51523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 22:42:03.085091   51523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 22:42:03.093482   51523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 22:42:03.101293   51523 kubeadm.go:404] StartCluster: {Name:cert-expiration-366864 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:cert-expiration-366864 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.23 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:42:03.101394   51523 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 22:42:03.101465   51523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:42:02.058163   50275 pod_ready.go:92] pod "kube-proxy-g265v" in "kube-system" namespace has status "Ready":"True"
	I0717 22:42:02.058193   50275 pod_ready.go:81] duration metric: took 404.301309ms waiting for pod "kube-proxy-g265v" in "kube-system" namespace to be "Ready" ...
	I0717 22:42:02.058206   50275 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-482945" in "kube-system" namespace to be "Ready" ...
	I0717 22:42:02.454335   50275 pod_ready.go:92] pod "kube-scheduler-pause-482945" in "kube-system" namespace has status "Ready":"True"
	I0717 22:42:02.454353   50275 pod_ready.go:81] duration metric: took 396.140042ms waiting for pod "kube-scheduler-pause-482945" in "kube-system" namespace to be "Ready" ...
	I0717 22:42:02.454361   50275 pod_ready.go:38] duration metric: took 2.607383279s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:42:02.454375   50275 api_server.go:52] waiting for apiserver process to appear ...
	I0717 22:42:02.454422   50275 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:42:02.467708   50275 api_server.go:72] duration metric: took 2.644473091s to wait for apiserver process to appear ...
	I0717 22:42:02.467732   50275 api_server.go:88] waiting for apiserver healthz status ...
	I0717 22:42:02.467754   50275 api_server.go:253] Checking apiserver healthz at https://192.168.61.117:8443/healthz ...
	I0717 22:42:02.475158   50275 api_server.go:279] https://192.168.61.117:8443/healthz returned 200:
	ok
	I0717 22:42:02.476888   50275 api_server.go:141] control plane version: v1.27.3
	I0717 22:42:02.476909   50275 api_server.go:131] duration metric: took 9.170645ms to wait for apiserver health ...
	I0717 22:42:02.476919   50275 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 22:42:02.657701   50275 system_pods.go:59] 7 kube-system pods found
	I0717 22:42:02.657723   50275 system_pods.go:61] "coredns-5d78c9869d-dk4wn" [d503aa06-1a7d-405f-8a1d-7c97f5901d9c] Running
	I0717 22:42:02.657728   50275 system_pods.go:61] "coredns-5d78c9869d-n5clq" [fcf8c414-139d-4e80-b399-989e458a4a30] Running
	I0717 22:42:02.657733   50275 system_pods.go:61] "etcd-pause-482945" [4ff77c7a-6b11-4010-b007-68fc5955b707] Running
	I0717 22:42:02.657737   50275 system_pods.go:61] "kube-apiserver-pause-482945" [48ceea8f-a971-4cbf-8cd2-94aedf6d3106] Running
	I0717 22:42:02.657741   50275 system_pods.go:61] "kube-controller-manager-pause-482945" [1e1e4675-f2d1-437b-9897-2d21b1402979] Running
	I0717 22:42:02.657745   50275 system_pods.go:61] "kube-proxy-g265v" [161f1f66-5158-437d-b56d-37ff4b108182] Running
	I0717 22:42:02.657748   50275 system_pods.go:61] "kube-scheduler-pause-482945" [5725079e-b6fd-4632-87ee-0128b2c0b84b] Running
	I0717 22:42:02.657754   50275 system_pods.go:74] duration metric: took 180.8303ms to wait for pod list to return data ...
	I0717 22:42:02.657760   50275 default_sa.go:34] waiting for default service account to be created ...
	I0717 22:42:02.855230   50275 default_sa.go:45] found service account: "default"
	I0717 22:42:02.855259   50275 default_sa.go:55] duration metric: took 197.492082ms for default service account to be created ...
	I0717 22:42:02.855269   50275 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 22:42:03.057448   50275 system_pods.go:86] 7 kube-system pods found
	I0717 22:42:03.057477   50275 system_pods.go:89] "coredns-5d78c9869d-dk4wn" [d503aa06-1a7d-405f-8a1d-7c97f5901d9c] Running
	I0717 22:42:03.057485   50275 system_pods.go:89] "coredns-5d78c9869d-n5clq" [fcf8c414-139d-4e80-b399-989e458a4a30] Running
	I0717 22:42:03.057491   50275 system_pods.go:89] "etcd-pause-482945" [4ff77c7a-6b11-4010-b007-68fc5955b707] Running
	I0717 22:42:03.057497   50275 system_pods.go:89] "kube-apiserver-pause-482945" [48ceea8f-a971-4cbf-8cd2-94aedf6d3106] Running
	I0717 22:42:03.057502   50275 system_pods.go:89] "kube-controller-manager-pause-482945" [1e1e4675-f2d1-437b-9897-2d21b1402979] Running
	I0717 22:42:03.057508   50275 system_pods.go:89] "kube-proxy-g265v" [161f1f66-5158-437d-b56d-37ff4b108182] Running
	I0717 22:42:03.057526   50275 system_pods.go:89] "kube-scheduler-pause-482945" [5725079e-b6fd-4632-87ee-0128b2c0b84b] Running
	I0717 22:42:03.057533   50275 system_pods.go:126] duration metric: took 202.258854ms to wait for k8s-apps to be running ...
	I0717 22:42:03.057542   50275 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 22:42:03.057592   50275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:42:03.075442   50275 system_svc.go:56] duration metric: took 17.892619ms WaitForService to wait for kubelet.
	I0717 22:42:03.075468   50275 kubeadm.go:581] duration metric: took 3.252235214s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 22:42:03.075490   50275 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:42:03.254796   50275 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:42:03.254826   50275 node_conditions.go:123] node cpu capacity is 2
	I0717 22:42:03.254838   50275 node_conditions.go:105] duration metric: took 179.342078ms to run NodePressure ...
	I0717 22:42:03.254850   50275 start.go:228] waiting for startup goroutines ...
	I0717 22:42:03.254859   50275 start.go:233] waiting for cluster config update ...
	I0717 22:42:03.254868   50275 start.go:242] writing updated cluster config ...
	I0717 22:42:03.255216   50275 ssh_runner.go:195] Run: rm -f paused
	I0717 22:42:03.318126   50275 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 22:42:03.320135   50275 out.go:177] * Done! kubectl is now configured to use "pause-482945" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-07-17 22:39:09 UTC, ends at Mon 2023-07-17 22:42:04 UTC. --
	Jul 17 22:42:04 pause-482945 crio[2696]: time="2023-07-17 22:42:04.138295880Z" level=debug msg="Request: &ListImagesRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=2b9d07ba-3d57-4c27-9cfe-00aef01dabe1 name=/runtime.v1.ImageService/ListImages
	Jul 17 22:42:04 pause-482945 crio[2696]: time="2023-07-17 22:42:04.138478566Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a\"" file="storage/storage_transport.go:185"
	Jul 17 22:42:04 pause-482945 crio[2696]: time="2023-07-17 22:42:04.138580909Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f\"" file="storage/storage_transport.go:185"
	Jul 17 22:42:04 pause-482945 crio[2696]: time="2023-07-17 22:42:04.138655094Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a\"" file="storage/storage_transport.go:185"
	Jul 17 22:42:04 pause-482945 crio[2696]: time="2023-07-17 22:42:04.138727013Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c\"" file="storage/storage_transport.go:185"
	Jul 17 22:42:04 pause-482945 crio[2696]: time="2023-07-17 22:42:04.138793814Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" file="storage/storage_transport.go:185"
	Jul 17 22:42:04 pause-482945 crio[2696]: time="2023-07-17 22:42:04.138863663Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681\"" file="storage/storage_transport.go:185"
	Jul 17 22:42:04 pause-482945 crio[2696]: time="2023-07-17 22:42:04.138930831Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" file="storage/storage_transport.go:185"
	Jul 17 22:42:04 pause-482945 crio[2696]: time="2023-07-17 22:42:04.139115049Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562\"" file="storage/storage_transport.go:185"
	Jul 17 22:42:04 pause-482945 crio[2696]: time="2023-07-17 22:42:04.139197800Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da\"" file="storage/storage_transport.go:185"
	Jul 17 22:42:04 pause-482945 crio[2696]: time="2023-07-17 22:42:04.139353870Z" level=debug msg="Response: &ListImagesResponse{Images:[]*Image{&Image{Id:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,RepoTags:[registry.k8s.io/kube-apiserver:v1.27.3],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0],Size_:122065872,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,RepoTags:[registry.k8s.io/kube-controller-manager:v1.27.3],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e registry.k8s.io/kube-controller-manager@sha256:d3bdc20876edfaa4894cf8464dc98592385a43cbc033b37846dccc2460c7bc06],Size_:113919286,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:41697ceeb70b3f49e54e
d46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,RepoTags:[registry.k8s.io/kube-scheduler:v1.27.3],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082 registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8],Size_:59811126,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,RepoTags:[registry.k8s.io/kube-proxy:v1.27.3],RepoDigests:[registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699],Size_:72713623,Uid:nil,Username:,Spec:nil,},&Image{Id:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,RepoTags:[registry.k8s.io/pause:3.9],RepoDigests:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34
c975d65175d994072d65341f62a8ab0754b0fafe10],Size_:750414,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,},&Image{Id:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,RepoTags:[registry.k8s.io/etcd:3.5.7-0],RepoDigests:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9],Size_:297083935,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378],Size_:53621675,Uid:nil,Username:,Spec:nil,},&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.
io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},&Image{Id:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,RepoTags:[docker.io/kindest/kindnetd:v20230511-dc714da8],RepoDigests:[docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974 docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9],Size_:65249302,Uid:nil,Username:,Spec:nil,},},}" file="go-grpc-middleware/chain.go:25" id=2b9d07ba-3d57-4c27-9cfe-00aef01dabe1 name=/runtime.v1.ImageService/ListImages
	Jul 17 22:42:04 pause-482945 crio[2696]: time="2023-07-17 22:42:04.157899886Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c806a4a3-db09-49c8-9e64-6d23cf1f355a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:42:04 pause-482945 crio[2696]: time="2023-07-17 22:42:04.157967100Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c806a4a3-db09-49c8-9e64-6d23cf1f355a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:42:04 pause-482945 crio[2696]: time="2023-07-17 22:42:04.158367074Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7b1a3cec3d7f08c4bec4b61e9223274e0170a12540d069fd0aff05e908d4daf,PodSandboxId:c45166f36f48cfad093e7aa17bf840a6de535f48696fbbe2847c686bc94ff7cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689633706838766953,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-n5clq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf8c414-139d-4e80-b399-989e458a4a30,},Annotations:map[string]string{io.kubernetes.container.hash: 7cf71c81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a8f9e69f45d4a6a76c15b6645a0c5795d518593044528d26e43dd0e95b4308,PodSandboxId:31c8cd93d1d4a9b9aa6a1b3de8cc9d6e0ce996cebe35e6eadf33cdecbab4e311,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689633706788693697,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g265v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 161f1f66-5158-437d-b56d-37ff4b108182,},Annotations:map[string]string{io.kubernetes.container.hash: a57fd1ed,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb26acf8425b19f67ced9fdf9b40e79fc7302c4fd666183ac14e0d1dcce5d86,PodSandboxId:b7b26560eed5dde0ff5750e33522c33f5b537097e79826022dd708495760f8f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689633706806923590,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-dk4wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d503aa06-1a7d-405f-8a1d
-7c97f5901d9c,},Annotations:map[string]string{io.kubernetes.container.hash: f65d1a1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2ae72714db8f4ff8c3fc755f466154f9817b1a38db0d90c1de112207dff34a,PodSandboxId:f3e38f10f9de54fe2388f3cceace6ef2353c69ec66e72584e6ec9a97aabe21e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,Created
At:1689633699575103623,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc8d774132f3e0d505df5afbd8cf90cf,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce2ff2a1ecaa45bdf6aa396fef0057b97c17bfa8ed5d7949f80fa0ab22edf0c2,PodSandboxId:d7d895f37ef9da3a476d1341c1149d932ecc418360caa92d17d0583de687b8ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:168963369890878
7803,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67feb80efd1440a7d5575d681ff300a1,},Annotations:map[string]string{io.kubernetes.container.hash: 643d762b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7749c0ac83e21eb203e9057eaa6f99e45faf7da9763ce362db9940be8913148a,PodSandboxId:5dd7fbee8125875816da72975c257920abab1fa9dfe30c718d0dedbdb7d6c5a4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689633659639272538,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79007c8b63df44d0b74e723ffe8e6a07,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad17ba250549dc1c6d445b231868022a4136390ccc25bacf622646bf1ea018f,PodSandboxId:f3e38f10f9de54fe2388f3cceace6ef2353c69ec66e72584e6ec9a97aabe21e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_EXITED,CreatedAt:1689633659672174382,Labels:map[string]string{io.ku
bernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc8d774132f3e0d505df5afbd8cf90cf,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c149109b9f6b9129034c4ee0f4717dde40e5abae9aa5b9cbacae788cfe33da,PodSandboxId:d8b88fe2f3540dea9a8286c2c9e590669182c6f49676f3495276c529ea02e4d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689633655643353499,Labels:map[string]string{io.
kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb0c3963dd9c9298d8758b4a0d5be12,},Annotations:map[string]string{io.kubernetes.container.hash: 5a7f9746,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3ec1be732d0fd461040ac16ca86d4c18696586aa8e9f36da865743a34d034e,PodSandboxId:31c8cd93d1d4a9b9aa6a1b3de8cc9d6e0ce996cebe35e6eadf33cdecbab4e311,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_EXITED,CreatedAt:1689633642490840990,Labels:map[string]string{io.kubernetes.container.name
: kube-proxy,io.kubernetes.pod.name: kube-proxy-g265v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 161f1f66-5158-437d-b56d-37ff4b108182,},Annotations:map[string]string{io.kubernetes.container.hash: a57fd1ed,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06bb567206e635dcd0a8952bebe235dcfe307f7da66faa94045ef7bf035fbf59,PodSandboxId:b7b26560eed5dde0ff5750e33522c33f5b537097e79826022dd708495760f8f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1689633642111089525,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name:
coredns-5d78c9869d-dk4wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d503aa06-1a7d-405f-8a1d-7c97f5901d9c,},Annotations:map[string]string{io.kubernetes.container.hash: f65d1a1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c8781aa5f3178cdd590e8e9479569ad7fefdee40094e5ba8b19699b9197821,PodSandboxId:c45166f36f48cfad093e7aa17bf840a6de535f48696fbbe2847c686bc94ff7cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead066
51cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1689633641564900851,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-n5clq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf8c414-139d-4e80-b399-989e458a4a30,},Annotations:map[string]string{io.kubernetes.container.hash: 7cf71c81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f223e08272c4313fc7d0e1fdadd144a278bc3ec384d59b4017acf363e60896c,PodSandboxId:d7d895f37ef9da3a476d1341c1149d932ecc418360caa92d17d0583de687b8ec,Metadata:&ContainerMetadata{Name:e
tcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_EXITED,CreatedAt:1689633640878479989,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67feb80efd1440a7d5575d681ff300a1,},Annotations:map[string]string{io.kubernetes.container.hash: 643d762b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d8373a804c69ba8d1619648497e02da7e9871a87beae160c2ce8150e37fc0ea,PodSandboxId:9f8cf08949e7302b373ff6a2533d1550dc21c212ed789d76063b3ce7c73853d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697cee
b70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,State:CONTAINER_EXITED,CreatedAt:1689633637082147141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79007c8b63df44d0b74e723ffe8e6a07,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1ca00541c56d22fe889137e6dae933b737e0deb8f00b9e9e0c8eba1b70894c7,PodSandboxId:a0025da2db90835f245b5f8a6d0bc9e679beedcd18b51800b617c700ae53d5da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079
a,Annotations:map[string]string{},},ImageRef:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,State:CONTAINER_EXITED,CreatedAt:1689633634813369176,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb0c3963dd9c9298d8758b4a0d5be12,},Annotations:map[string]string{io.kubernetes.container.hash: 5a7f9746,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c806a4a3-db09-49c8-9e64-6d23cf1f355a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:42:04 pause-482945 crio[2696]: time="2023-07-17 22:42:04.171620862Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=192e2836-446c-4fa5-b098-0df28c7ba417 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 22:42:04 pause-482945 crio[2696]: time="2023-07-17 22:42:04.171872351Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:b7b26560eed5dde0ff5750e33522c33f5b537097e79826022dd708495760f8f2,Metadata:&PodSandboxMetadata{Name:coredns-5d78c9869d-dk4wn,Uid:d503aa06-1a7d-405f-8a1d-7c97f5901d9c,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1689633638886644950,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5d78c9869d-dk4wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d503aa06-1a7d-405f-8a1d-7c97f5901d9c,k8s-app: kube-dns,pod-template-hash: 5d78c9869d,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T22:39:52.710735935Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c45166f36f48cfad093e7aa17bf840a6de535f48696fbbe2847c686bc94ff7cd,Metadata:&PodSandboxMetadata{Name:coredns-5d78c9869d-n5clq,Uid:fcf8c414-139d-4e80-b399-989e458a4a30,Namespace:kube-system
,Attempt:2,},State:SANDBOX_READY,CreatedAt:1689633638857635055,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5d78c9869d-n5clq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf8c414-139d-4e80-b399-989e458a4a30,k8s-app: kube-dns,pod-template-hash: 5d78c9869d,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T22:39:52.659757319Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f3e38f10f9de54fe2388f3cceace6ef2353c69ec66e72584e6ec9a97aabe21e6,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-482945,Uid:bc8d774132f3e0d505df5afbd8cf90cf,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1689633638767780899,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc8d774132f3e0d505df5afbd8cf90cf,tier: control-plane,},Annotations:map
[string]string{kubernetes.io/config.hash: bc8d774132f3e0d505df5afbd8cf90cf,kubernetes.io/config.seen: 2023-07-17T22:39:40.598312795Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5dd7fbee8125875816da72975c257920abab1fa9dfe30c718d0dedbdb7d6c5a4,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-482945,Uid:79007c8b63df44d0b74e723ffe8e6a07,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1689633638717938162,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79007c8b63df44d0b74e723ffe8e6a07,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 79007c8b63df44d0b74e723ffe8e6a07,kubernetes.io/config.seen: 2023-07-17T22:39:40.598313546Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d7d895f37ef9da3a476d1341c1149d932ecc418360caa92d17d0583de687b8ec,Metadata:&PodSandboxMetadata{Name:et
cd-pause-482945,Uid:67feb80efd1440a7d5575d681ff300a1,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1689633638710622040,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67feb80efd1440a7d5575d681ff300a1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.117:2379,kubernetes.io/config.hash: 67feb80efd1440a7d5575d681ff300a1,kubernetes.io/config.seen: 2023-07-17T22:39:40.598308234Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d8b88fe2f3540dea9a8286c2c9e590669182c6f49676f3495276c529ea02e4d9,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-482945,Uid:8cb0c3963dd9c9298d8758b4a0d5be12,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1689633638556670732,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name:
kube-apiserver-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb0c3963dd9c9298d8758b4a0d5be12,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.117:8443,kubernetes.io/config.hash: 8cb0c3963dd9c9298d8758b4a0d5be12,kubernetes.io/config.seen: 2023-07-17T22:39:40.598311776Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:31c8cd93d1d4a9b9aa6a1b3de8cc9d6e0ce996cebe35e6eadf33cdecbab4e311,Metadata:&PodSandboxMetadata{Name:kube-proxy-g265v,Uid:161f1f66-5158-437d-b56d-37ff4b108182,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1689633638471515004,Labels:map[string]string{controller-revision-hash: 56999f657b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-g265v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 161f1f66-5158-437d-b56d-37ff4b108182,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/co
nfig.seen: 2023-07-17T22:39:52.242894557Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=192e2836-446c-4fa5-b098-0df28c7ba417 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 22:42:04 pause-482945 crio[2696]: time="2023-07-17 22:42:04.172779178Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a85b932d-4b63-4e6f-8bf4-359f60a1b5e1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 22:42:04 pause-482945 crio[2696]: time="2023-07-17 22:42:04.172940658Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a85b932d-4b63-4e6f-8bf4-359f60a1b5e1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 22:42:04 pause-482945 crio[2696]: time="2023-07-17 22:42:04.173259638Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7b1a3cec3d7f08c4bec4b61e9223274e0170a12540d069fd0aff05e908d4daf,PodSandboxId:c45166f36f48cfad093e7aa17bf840a6de535f48696fbbe2847c686bc94ff7cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689633706838766953,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-n5clq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf8c414-139d-4e80-b399-989e458a4a30,},Annotations:map[string]string{io.kubernetes.container.hash: 7cf71c81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a8f9e69f45d4a6a76c15b6645a0c5795d518593044528d26e43dd0e95b4308,PodSandboxId:31c8cd93d1d4a9b9aa6a1b3de8cc9d6e0ce996cebe35e6eadf33cdecbab4e311,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689633706788693697,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g265v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 161f1f66-5158-437d-b56d-37ff4b108182,},Annotations:map[string]string{io.kubernetes.container.hash: a57fd1ed,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb26acf8425b19f67ced9fdf9b40e79fc7302c4fd666183ac14e0d1dcce5d86,PodSandboxId:b7b26560eed5dde0ff5750e33522c33f5b537097e79826022dd708495760f8f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689633706806923590,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-dk4wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d503aa06-1a7d-405f-8a1d
-7c97f5901d9c,},Annotations:map[string]string{io.kubernetes.container.hash: f65d1a1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2ae72714db8f4ff8c3fc755f466154f9817b1a38db0d90c1de112207dff34a,PodSandboxId:f3e38f10f9de54fe2388f3cceace6ef2353c69ec66e72584e6ec9a97aabe21e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,Created
At:1689633699575103623,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc8d774132f3e0d505df5afbd8cf90cf,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce2ff2a1ecaa45bdf6aa396fef0057b97c17bfa8ed5d7949f80fa0ab22edf0c2,PodSandboxId:d7d895f37ef9da3a476d1341c1149d932ecc418360caa92d17d0583de687b8ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:168963369890878
7803,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67feb80efd1440a7d5575d681ff300a1,},Annotations:map[string]string{io.kubernetes.container.hash: 643d762b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7749c0ac83e21eb203e9057eaa6f99e45faf7da9763ce362db9940be8913148a,PodSandboxId:5dd7fbee8125875816da72975c257920abab1fa9dfe30c718d0dedbdb7d6c5a4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689633659639272538,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79007c8b63df44d0b74e723ffe8e6a07,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c149109b9f6b9129034c4ee0f4717dde40e5abae9aa5b9cbacae788cfe33da,PodSandboxId:d8b88fe2f3540dea9a8286c2c9e590669182c6f49676f3495276c529ea02e4d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689633655643353499,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb0c3963dd9c9298d8758b4a0d5be12,},Annotations:map[string]string{io.kubernetes.container.hash: 5a7f9746,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a85b932d-4b63-4e6f-8bf4-359f60a1b5e1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 22:42:04 pause-482945 crio[2696]: time="2023-07-17 22:42:04.213883868Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b90653d4-06c3-433b-a80e-9853e29461c4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:42:04 pause-482945 crio[2696]: time="2023-07-17 22:42:04.214053578Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b90653d4-06c3-433b-a80e-9853e29461c4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:42:04 pause-482945 crio[2696]: time="2023-07-17 22:42:04.214475832Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7b1a3cec3d7f08c4bec4b61e9223274e0170a12540d069fd0aff05e908d4daf,PodSandboxId:c45166f36f48cfad093e7aa17bf840a6de535f48696fbbe2847c686bc94ff7cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689633706838766953,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-n5clq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf8c414-139d-4e80-b399-989e458a4a30,},Annotations:map[string]string{io.kubernetes.container.hash: 7cf71c81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a8f9e69f45d4a6a76c15b6645a0c5795d518593044528d26e43dd0e95b4308,PodSandboxId:31c8cd93d1d4a9b9aa6a1b3de8cc9d6e0ce996cebe35e6eadf33cdecbab4e311,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689633706788693697,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g265v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 161f1f66-5158-437d-b56d-37ff4b108182,},Annotations:map[string]string{io.kubernetes.container.hash: a57fd1ed,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb26acf8425b19f67ced9fdf9b40e79fc7302c4fd666183ac14e0d1dcce5d86,PodSandboxId:b7b26560eed5dde0ff5750e33522c33f5b537097e79826022dd708495760f8f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689633706806923590,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-dk4wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d503aa06-1a7d-405f-8a1d
-7c97f5901d9c,},Annotations:map[string]string{io.kubernetes.container.hash: f65d1a1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2ae72714db8f4ff8c3fc755f466154f9817b1a38db0d90c1de112207dff34a,PodSandboxId:f3e38f10f9de54fe2388f3cceace6ef2353c69ec66e72584e6ec9a97aabe21e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,Created
At:1689633699575103623,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc8d774132f3e0d505df5afbd8cf90cf,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce2ff2a1ecaa45bdf6aa396fef0057b97c17bfa8ed5d7949f80fa0ab22edf0c2,PodSandboxId:d7d895f37ef9da3a476d1341c1149d932ecc418360caa92d17d0583de687b8ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:168963369890878
7803,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67feb80efd1440a7d5575d681ff300a1,},Annotations:map[string]string{io.kubernetes.container.hash: 643d762b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7749c0ac83e21eb203e9057eaa6f99e45faf7da9763ce362db9940be8913148a,PodSandboxId:5dd7fbee8125875816da72975c257920abab1fa9dfe30c718d0dedbdb7d6c5a4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689633659639272538,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79007c8b63df44d0b74e723ffe8e6a07,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad17ba250549dc1c6d445b231868022a4136390ccc25bacf622646bf1ea018f,PodSandboxId:f3e38f10f9de54fe2388f3cceace6ef2353c69ec66e72584e6ec9a97aabe21e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_EXITED,CreatedAt:1689633659672174382,Labels:map[string]string{io.ku
bernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc8d774132f3e0d505df5afbd8cf90cf,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c149109b9f6b9129034c4ee0f4717dde40e5abae9aa5b9cbacae788cfe33da,PodSandboxId:d8b88fe2f3540dea9a8286c2c9e590669182c6f49676f3495276c529ea02e4d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689633655643353499,Labels:map[string]string{io.
kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb0c3963dd9c9298d8758b4a0d5be12,},Annotations:map[string]string{io.kubernetes.container.hash: 5a7f9746,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3ec1be732d0fd461040ac16ca86d4c18696586aa8e9f36da865743a34d034e,PodSandboxId:31c8cd93d1d4a9b9aa6a1b3de8cc9d6e0ce996cebe35e6eadf33cdecbab4e311,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_EXITED,CreatedAt:1689633642490840990,Labels:map[string]string{io.kubernetes.container.name
: kube-proxy,io.kubernetes.pod.name: kube-proxy-g265v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 161f1f66-5158-437d-b56d-37ff4b108182,},Annotations:map[string]string{io.kubernetes.container.hash: a57fd1ed,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06bb567206e635dcd0a8952bebe235dcfe307f7da66faa94045ef7bf035fbf59,PodSandboxId:b7b26560eed5dde0ff5750e33522c33f5b537097e79826022dd708495760f8f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1689633642111089525,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name:
coredns-5d78c9869d-dk4wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d503aa06-1a7d-405f-8a1d-7c97f5901d9c,},Annotations:map[string]string{io.kubernetes.container.hash: f65d1a1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c8781aa5f3178cdd590e8e9479569ad7fefdee40094e5ba8b19699b9197821,PodSandboxId:c45166f36f48cfad093e7aa17bf840a6de535f48696fbbe2847c686bc94ff7cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead066
51cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1689633641564900851,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-n5clq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf8c414-139d-4e80-b399-989e458a4a30,},Annotations:map[string]string{io.kubernetes.container.hash: 7cf71c81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f223e08272c4313fc7d0e1fdadd144a278bc3ec384d59b4017acf363e60896c,PodSandboxId:d7d895f37ef9da3a476d1341c1149d932ecc418360caa92d17d0583de687b8ec,Metadata:&ContainerMetadata{Name:e
tcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_EXITED,CreatedAt:1689633640878479989,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67feb80efd1440a7d5575d681ff300a1,},Annotations:map[string]string{io.kubernetes.container.hash: 643d762b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d8373a804c69ba8d1619648497e02da7e9871a87beae160c2ce8150e37fc0ea,PodSandboxId:9f8cf08949e7302b373ff6a2533d1550dc21c212ed789d76063b3ce7c73853d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697cee
b70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,State:CONTAINER_EXITED,CreatedAt:1689633637082147141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79007c8b63df44d0b74e723ffe8e6a07,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1ca00541c56d22fe889137e6dae933b737e0deb8f00b9e9e0c8eba1b70894c7,PodSandboxId:a0025da2db90835f245b5f8a6d0bc9e679beedcd18b51800b617c700ae53d5da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079
a,Annotations:map[string]string{},},ImageRef:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,State:CONTAINER_EXITED,CreatedAt:1689633634813369176,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb0c3963dd9c9298d8758b4a0d5be12,},Annotations:map[string]string{io.kubernetes.container.hash: 5a7f9746,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b90653d4-06c3-433b-a80e-9853e29461c4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:42:04 pause-482945 crio[2696]: time="2023-07-17 22:42:04.267594756Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=44ad0712-97a2-4830-88b2-350e2080ed2c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:42:04 pause-482945 crio[2696]: time="2023-07-17 22:42:04.267665306Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=44ad0712-97a2-4830-88b2-350e2080ed2c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:42:04 pause-482945 crio[2696]: time="2023-07-17 22:42:04.268080511Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7b1a3cec3d7f08c4bec4b61e9223274e0170a12540d069fd0aff05e908d4daf,PodSandboxId:c45166f36f48cfad093e7aa17bf840a6de535f48696fbbe2847c686bc94ff7cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689633706838766953,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-n5clq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf8c414-139d-4e80-b399-989e458a4a30,},Annotations:map[string]string{io.kubernetes.container.hash: 7cf71c81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a8f9e69f45d4a6a76c15b6645a0c5795d518593044528d26e43dd0e95b4308,PodSandboxId:31c8cd93d1d4a9b9aa6a1b3de8cc9d6e0ce996cebe35e6eadf33cdecbab4e311,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689633706788693697,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g265v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 161f1f66-5158-437d-b56d-37ff4b108182,},Annotations:map[string]string{io.kubernetes.container.hash: a57fd1ed,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bb26acf8425b19f67ced9fdf9b40e79fc7302c4fd666183ac14e0d1dcce5d86,PodSandboxId:b7b26560eed5dde0ff5750e33522c33f5b537097e79826022dd708495760f8f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689633706806923590,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-dk4wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d503aa06-1a7d-405f-8a1d
-7c97f5901d9c,},Annotations:map[string]string{io.kubernetes.container.hash: f65d1a1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2ae72714db8f4ff8c3fc755f466154f9817b1a38db0d90c1de112207dff34a,PodSandboxId:f3e38f10f9de54fe2388f3cceace6ef2353c69ec66e72584e6ec9a97aabe21e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,Created
At:1689633699575103623,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc8d774132f3e0d505df5afbd8cf90cf,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce2ff2a1ecaa45bdf6aa396fef0057b97c17bfa8ed5d7949f80fa0ab22edf0c2,PodSandboxId:d7d895f37ef9da3a476d1341c1149d932ecc418360caa92d17d0583de687b8ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:168963369890878
7803,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67feb80efd1440a7d5575d681ff300a1,},Annotations:map[string]string{io.kubernetes.container.hash: 643d762b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7749c0ac83e21eb203e9057eaa6f99e45faf7da9763ce362db9940be8913148a,PodSandboxId:5dd7fbee8125875816da72975c257920abab1fa9dfe30c718d0dedbdb7d6c5a4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689633659639272538,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79007c8b63df44d0b74e723ffe8e6a07,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad17ba250549dc1c6d445b231868022a4136390ccc25bacf622646bf1ea018f,PodSandboxId:f3e38f10f9de54fe2388f3cceace6ef2353c69ec66e72584e6ec9a97aabe21e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_EXITED,CreatedAt:1689633659672174382,Labels:map[string]string{io.ku
bernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc8d774132f3e0d505df5afbd8cf90cf,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c149109b9f6b9129034c4ee0f4717dde40e5abae9aa5b9cbacae788cfe33da,PodSandboxId:d8b88fe2f3540dea9a8286c2c9e590669182c6f49676f3495276c529ea02e4d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689633655643353499,Labels:map[string]string{io.
kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb0c3963dd9c9298d8758b4a0d5be12,},Annotations:map[string]string{io.kubernetes.container.hash: 5a7f9746,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3ec1be732d0fd461040ac16ca86d4c18696586aa8e9f36da865743a34d034e,PodSandboxId:31c8cd93d1d4a9b9aa6a1b3de8cc9d6e0ce996cebe35e6eadf33cdecbab4e311,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_EXITED,CreatedAt:1689633642490840990,Labels:map[string]string{io.kubernetes.container.name
: kube-proxy,io.kubernetes.pod.name: kube-proxy-g265v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 161f1f66-5158-437d-b56d-37ff4b108182,},Annotations:map[string]string{io.kubernetes.container.hash: a57fd1ed,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06bb567206e635dcd0a8952bebe235dcfe307f7da66faa94045ef7bf035fbf59,PodSandboxId:b7b26560eed5dde0ff5750e33522c33f5b537097e79826022dd708495760f8f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1689633642111089525,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name:
coredns-5d78c9869d-dk4wn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d503aa06-1a7d-405f-8a1d-7c97f5901d9c,},Annotations:map[string]string{io.kubernetes.container.hash: f65d1a1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c8781aa5f3178cdd590e8e9479569ad7fefdee40094e5ba8b19699b9197821,PodSandboxId:c45166f36f48cfad093e7aa17bf840a6de535f48696fbbe2847c686bc94ff7cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead066
51cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1689633641564900851,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-n5clq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf8c414-139d-4e80-b399-989e458a4a30,},Annotations:map[string]string{io.kubernetes.container.hash: 7cf71c81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f223e08272c4313fc7d0e1fdadd144a278bc3ec384d59b4017acf363e60896c,PodSandboxId:d7d895f37ef9da3a476d1341c1149d932ecc418360caa92d17d0583de687b8ec,Metadata:&ContainerMetadata{Name:e
tcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_EXITED,CreatedAt:1689633640878479989,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67feb80efd1440a7d5575d681ff300a1,},Annotations:map[string]string{io.kubernetes.container.hash: 643d762b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d8373a804c69ba8d1619648497e02da7e9871a87beae160c2ce8150e37fc0ea,PodSandboxId:9f8cf08949e7302b373ff6a2533d1550dc21c212ed789d76063b3ce7c73853d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697cee
b70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,State:CONTAINER_EXITED,CreatedAt:1689633637082147141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79007c8b63df44d0b74e723ffe8e6a07,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1ca00541c56d22fe889137e6dae933b737e0deb8f00b9e9e0c8eba1b70894c7,PodSandboxId:a0025da2db90835f245b5f8a6d0bc9e679beedcd18b51800b617c700ae53d5da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079
a,Annotations:map[string]string{},},ImageRef:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,State:CONTAINER_EXITED,CreatedAt:1689633634813369176,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb0c3963dd9c9298d8758b4a0d5be12,},Annotations:map[string]string{io.kubernetes.container.hash: 5a7f9746,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=44ad0712-97a2-4830-88b2-350e2080ed2c name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	a7b1a3cec3d7f       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   17 seconds ago       Running             coredns                   2                   c45166f36f48c
	2bb26acf8425b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   17 seconds ago       Running             coredns                   2                   b7b26560eed5d
	f6a8f9e69f45d       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c   17 seconds ago       Running             kube-proxy                2                   31c8cd93d1d4a
	1d2ae72714db8       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f   24 seconds ago       Running             kube-controller-manager   3                   f3e38f10f9de5
	ce2ff2a1ecaa4       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681   25 seconds ago       Running             etcd                      2                   d7d895f37ef9d
	3ad17ba250549       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f   About a minute ago   Exited              kube-controller-manager   2                   f3e38f10f9de5
	7749c0ac83e21       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a   About a minute ago   Running             kube-scheduler            2                   5dd7fbee81258
	56c149109b9f6       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a   About a minute ago   Running             kube-apiserver            2                   d8b88fe2f3540
	7e3ec1be732d0       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c   About a minute ago   Exited              kube-proxy                1                   31c8cd93d1d4a
	06bb567206e63       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   About a minute ago   Exited              coredns                   1                   b7b26560eed5d
	d5c8781aa5f31       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   About a minute ago   Exited              coredns                   1                   c45166f36f48c
	9f223e08272c4       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681   About a minute ago   Exited              etcd                      1                   d7d895f37ef9d
	8d8373a804c69       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a   About a minute ago   Exited              kube-scheduler            1                   9f8cf08949e73
	c1ca00541c56d       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a   About a minute ago   Exited              kube-apiserver            1                   a0025da2db908
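Read together, the table shows each kube-system container with an older exited attempt alongside a newer running one (attempt 3 for kube-controller-manager), consistent with the control plane having been restarted in place on pause-482945. A comparable listing can usually be reproduced straight from the node's runtime; a sketch, assuming the pause-482945 profile is still up and that crictl is available in the guest image:

	# hedged reproduction command; profile name taken from this log
	minikube -p pause-482945 ssh "sudo crictl ps -a"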
	
	* 
	* ==> coredns [06bb567206e635dcd0a8952bebe235dcfe307f7da66faa94045ef7bf035fbf59] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46705 - 36383 "HINFO IN 849172439031116182.652538650170333966. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.007112408s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> coredns [2bb26acf8425b19f67ced9fdf9b40e79fc7302c4fd666183ac14e0d1dcce5d86] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41515 - 44588 "HINFO IN 8017463996740868435.6312161245482969947. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.007943355s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [a7b1a3cec3d7f08c4bec4b61e9223274e0170a12540d069fd0aff05e908d4daf] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40802 - 18718 "HINFO IN 333239259983893566.5303941569989730839. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013104622s
	
	* 
	* ==> coredns [d5c8781aa5f3178cdd590e8e9479569ad7fefdee40094e5ba8b19699b9197821] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41056 - 9930 "HINFO IN 3443356411995585141.4116104925062061114. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00733953s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
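Both exited coredns attempts logged long stretches of "waiting for Kubernetes API" and a connection-refused error against 10.96.0.1:443 before the SIGTERM shutdown, while the attempt-2 containers above start cleanly and answer their HINFO self-check. Whether the replacement pods reached Ready can be checked from the client side; a sketch, assuming the standard k8s-app=kube-dns label on the coredns deployment:

	kubectl --context pause-482945 -n kube-system get pods -l k8s-app=kube-dns -o wide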
	
	* 
	* ==> describe nodes <==
	* Name:               pause-482945
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-482945
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8
	                    minikube.k8s.io/name=pause-482945
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T22_39_40_0700
	                    minikube.k8s.io/version=v1.31.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 22:39:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-482945
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 22:41:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 22:41:47 +0000   Mon, 17 Jul 2023 22:39:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 22:41:47 +0000   Mon, 17 Jul 2023 22:39:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 22:41:47 +0000   Mon, 17 Jul 2023 22:39:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 22:41:47 +0000   Mon, 17 Jul 2023 22:41:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.117
	  Hostname:    pause-482945
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 8f33221eacfb410c82330fe610b7ef04
	  System UUID:                8f33221e-acfb-410c-8233-0fe610b7ef04
	  Boot ID:                    539e8f43-9b69-43c9-b849-1055e701ed92
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-n5clq                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m13s
	  kube-system                 etcd-pause-482945                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m25s
	  kube-system                 kube-apiserver-pause-482945             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-controller-manager-pause-482945    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-proxy-g265v                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-scheduler-pause-482945             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m9s               kube-proxy       
	  Normal  Starting                 17s                kube-proxy       
	  Normal  NodeHasNoDiskPressure    2m25s              kubelet          Node pause-482945 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m25s              kubelet          Node pause-482945 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m25s              kubelet          Node pause-482945 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m25s              kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m25s              kubelet          Starting kubelet.
	  Normal  NodeReady                2m24s              kubelet          Node pause-482945 status is now: NodeReady
	  Normal  RegisteredNode           2m14s              node-controller  Node pause-482945 event: Registered Node pause-482945 in Controller
	  Normal  NodeAllocatableEnforced  61s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    27s (x2 over 61s)  kubelet          Node pause-482945 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s (x2 over 61s)  kubelet          Node pause-482945 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  27s (x2 over 61s)  kubelet          Node pause-482945 status is now: NodeHasSufficientMemory
	  Normal  NodeNotReady             24s                kubelet          Node pause-482945 status is now: NodeNotReady
	  Normal  NodeReady                18s                kubelet          Node pause-482945 status is now: NodeReady
	  Normal  RegisteredNode           8s                 node-controller  Node pause-482945 event: Registered Node pause-482945 in Controller
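The event stream shows the node cycling through NodeNotReady at 24s and back to NodeReady at 18s, with a second RegisteredNode event recorded once the restarted controller-manager resynced. The same description can be pulled live from the cluster; a sketch, assuming the kubeconfig context minikube created for this profile:

	kubectl --context pause-482945 describe node pause-482945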
	
	* 
	* ==> dmesg <==
	* [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071855] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.629841] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.513788] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.175109] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000004] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.324413] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.532129] systemd-fstab-generator[645]: Ignoring "noauto" for root device
	[  +0.125339] systemd-fstab-generator[656]: Ignoring "noauto" for root device
	[  +0.163833] systemd-fstab-generator[669]: Ignoring "noauto" for root device
	[  +0.126203] systemd-fstab-generator[680]: Ignoring "noauto" for root device
	[  +0.268383] systemd-fstab-generator[704]: Ignoring "noauto" for root device
	[  +9.085279] systemd-fstab-generator[928]: Ignoring "noauto" for root device
	[  +9.863013] systemd-fstab-generator[1259]: Ignoring "noauto" for root device
	[Jul17 22:40] kauditd_printk_skb: 28 callbacks suppressed
	[  +1.479638] systemd-fstab-generator[2404]: Ignoring "noauto" for root device
	[  +0.416051] systemd-fstab-generator[2468]: Ignoring "noauto" for root device
	[  +0.380208] systemd-fstab-generator[2499]: Ignoring "noauto" for root device
	[  +0.348395] systemd-fstab-generator[2510]: Ignoring "noauto" for root device
	[  +0.651388] systemd-fstab-generator[2544]: Ignoring "noauto" for root device
	[  +1.809851] kauditd_printk_skb: 8 callbacks suppressed
	[Jul17 22:41] systemd-fstab-generator[3744]: Ignoring "noauto" for root device
	[ +43.692051] kauditd_printk_skb: 8 callbacks suppressed
	
	* 
	* ==> etcd [9f223e08272c4313fc7d0e1fdadd144a278bc3ec384d59b4017acf363e60896c] <==
	* {"level":"info","ts":"2023-07-17T22:40:42.191Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-17T22:40:42.191Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"58dbdb76d15f9806","initial-advertise-peer-urls":["https://192.168.61.117:2380"],"listen-peer-urls":["https://192.168.61.117:2380"],"advertise-client-urls":["https://192.168.61.117:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.117:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-17T22:40:42.191Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-17T22:40:42.191Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.61.117:2380"}
	{"level":"info","ts":"2023-07-17T22:40:42.191Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.61.117:2380"}
	{"level":"info","ts":"2023-07-17T22:40:43.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58dbdb76d15f9806 is starting a new election at term 2"}
	{"level":"info","ts":"2023-07-17T22:40:43.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58dbdb76d15f9806 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-07-17T22:40:43.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58dbdb76d15f9806 received MsgPreVoteResp from 58dbdb76d15f9806 at term 2"}
	{"level":"info","ts":"2023-07-17T22:40:43.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58dbdb76d15f9806 became candidate at term 3"}
	{"level":"info","ts":"2023-07-17T22:40:43.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58dbdb76d15f9806 received MsgVoteResp from 58dbdb76d15f9806 at term 3"}
	{"level":"info","ts":"2023-07-17T22:40:43.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58dbdb76d15f9806 became leader at term 3"}
	{"level":"info","ts":"2023-07-17T22:40:43.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 58dbdb76d15f9806 elected leader 58dbdb76d15f9806 at term 3"}
	{"level":"info","ts":"2023-07-17T22:40:43.630Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T22:40:43.631Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"58dbdb76d15f9806","local-member-attributes":"{Name:pause-482945 ClientURLs:[https://192.168.61.117:2379]}","request-path":"/0/members/58dbdb76d15f9806/attributes","cluster-id":"b6f25112358c5425","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-17T22:40:43.631Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T22:40:43.631Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.61.117:2379"}
	{"level":"info","ts":"2023-07-17T22:40:43.632Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-17T22:40:43.632Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-17T22:40:43.632Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-17T22:41:01.280Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-07-17T22:41:01.280Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"pause-482945","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.117:2380"],"advertise-client-urls":["https://192.168.61.117:2379"]}
	{"level":"info","ts":"2023-07-17T22:41:01.368Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"58dbdb76d15f9806","current-leader-member-id":"58dbdb76d15f9806"}
	{"level":"info","ts":"2023-07-17T22:41:01.375Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.61.117:2380"}
	{"level":"info","ts":"2023-07-17T22:41:01.378Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.61.117:2380"}
	{"level":"info","ts":"2023-07-17T22:41:01.378Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"pause-482945","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.117:2380"],"advertise-client-urls":["https://192.168.61.117:2379"]}
	
	* 
	* ==> etcd [ce2ff2a1ecaa45bdf6aa396fef0057b97c17bfa8ed5d7949f80fa0ab22edf0c2] <==
	* {"level":"info","ts":"2023-07-17T22:41:39.497Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-17T22:41:39.497Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-17T22:41:39.498Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58dbdb76d15f9806 switched to configuration voters=(6402952598602618886)"}
	{"level":"info","ts":"2023-07-17T22:41:39.498Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b6f25112358c5425","local-member-id":"58dbdb76d15f9806","added-peer-id":"58dbdb76d15f9806","added-peer-peer-urls":["https://192.168.61.117:2380"]}
	{"level":"info","ts":"2023-07-17T22:41:39.498Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b6f25112358c5425","local-member-id":"58dbdb76d15f9806","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T22:41:39.498Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T22:41:39.499Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-17T22:41:39.500Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"58dbdb76d15f9806","initial-advertise-peer-urls":["https://192.168.61.117:2380"],"listen-peer-urls":["https://192.168.61.117:2380"],"advertise-client-urls":["https://192.168.61.117:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.117:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-17T22:41:39.500Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-17T22:41:39.500Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.61.117:2380"}
	{"level":"info","ts":"2023-07-17T22:41:39.500Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.61.117:2380"}
	{"level":"info","ts":"2023-07-17T22:41:40.585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58dbdb76d15f9806 is starting a new election at term 3"}
	{"level":"info","ts":"2023-07-17T22:41:40.586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58dbdb76d15f9806 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-07-17T22:41:40.586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58dbdb76d15f9806 received MsgPreVoteResp from 58dbdb76d15f9806 at term 3"}
	{"level":"info","ts":"2023-07-17T22:41:40.586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58dbdb76d15f9806 became candidate at term 4"}
	{"level":"info","ts":"2023-07-17T22:41:40.586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58dbdb76d15f9806 received MsgVoteResp from 58dbdb76d15f9806 at term 4"}
	{"level":"info","ts":"2023-07-17T22:41:40.586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58dbdb76d15f9806 became leader at term 4"}
	{"level":"info","ts":"2023-07-17T22:41:40.586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 58dbdb76d15f9806 elected leader 58dbdb76d15f9806 at term 4"}
	{"level":"info","ts":"2023-07-17T22:41:40.587Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"58dbdb76d15f9806","local-member-attributes":"{Name:pause-482945 ClientURLs:[https://192.168.61.117:2379]}","request-path":"/0/members/58dbdb76d15f9806/attributes","cluster-id":"b6f25112358c5425","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-17T22:41:40.587Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T22:41:40.589Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.61.117:2379"}
	{"level":"info","ts":"2023-07-17T22:41:40.592Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T22:41:40.595Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-17T22:41:40.596Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-17T22:41:40.596Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  22:42:05 up 3 min,  0 users,  load average: 1.05, 0.65, 0.26
	Linux pause-482945 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [56c149109b9f6b9129034c4ee0f4717dde40e5abae9aa5b9cbacae788cfe33da] <==
	* Trace[111396636]: [6.811653063s] [6.811653063s] END
	I0717 22:41:46.606213       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0717 22:41:47.167337       1 trace.go:219] Trace[1469252900]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:300e2b27-7aed-41d0-b605-da399e1b940d,client:192.168.61.117,protocol:HTTP/2.0,resource:csinodes,scope:resource,url:/apis/storage.k8s.io/v1/csinodes/pause-482945,user-agent:kubelet/v1.27.3 (linux/amd64) kubernetes/25b4e43,verb:GET (17-Jul-2023 22:41:04.101) (total time: 43065ms):
	Trace[1469252900]: ---"About to write a response" 43065ms (22:41:47.166)
	Trace[1469252900]: [43.065391777s] [43.065391777s] END
	I0717 22:41:47.523791       1 trace.go:219] Trace[1433160230]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:fa29b799-86fc-4b87-b821-3c8b1f767ac3,client:192.168.61.117,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/default/events,user-agent:kubelet/v1.27.3 (linux/amd64) kubernetes/25b4e43,verb:POST (17-Jul-2023 22:41:38.095) (total time: 9428ms):
	Trace[1433160230]: ["Create etcd3" audit-id:fa29b799-86fc-4b87-b821-3c8b1f767ac3,key:/events/default/pause-482945.1772c8dd94399619,type:*core.Event,resource:events 9427ms (22:41:38.096)
	Trace[1433160230]:  ---"TransformToStorage succeeded" 9421ms (22:41:47.518)]
	Trace[1433160230]: [9.428564837s] [9.428564837s] END
	I0717 22:41:47.527193       1 trace.go:219] Trace[752774258]: "GuaranteedUpdate etcd3" audit-id:,key:/ranges/serviceips,type:*core.RangeAllocation,resource:serviceipallocations (17-Jul-2023 22:41:46.629) (total time: 897ms):
	Trace[752774258]: ---"initial value restored" 897ms (22:41:47.527)
	Trace[752774258]: [897.73929ms] [897.73929ms] END
	I0717 22:41:47.527482       1 trace.go:219] Trace[1559249855]: "Create" accept:application/json, */*,audit-id:9c486d77-cb1d-4033-b3a7-a0a0c0d39c79,client:192.168.61.117,protocol:HTTP/2.0,resource:services,scope:resource,url:/api/v1/namespaces/kube-system/services,user-agent:kubeadm/v1.27.3 (linux/amd64) kubernetes/25b4e43,verb:POST (17-Jul-2023 22:41:46.628) (total time: 899ms):
	Trace[1559249855]: ---"Write to database call failed" len:610,err:Service "kube-dns" is invalid: spec.clusterIPs: Invalid value: []string{"10.96.0.10"}: failed to allocate IP 10.96.0.10: provided IP is already allocated 898ms (22:41:47.527)
	Trace[1559249855]: [899.084314ms] [899.084314ms] END
	I0717 22:41:47.561546       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0717 22:41:47.646932       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 22:41:47.673610       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 22:41:48.019253       1 trace.go:219] Trace[81677459]: "Create" accept:application/vnd.kubernetes.protobuf, */*,audit-id:fe8892c6-5195-431c-a4b0-84dc55f09e87,client:192.168.61.117,protocol:HTTP/2.0,resource:events,scope:resource,url:/apis/events.k8s.io/v1/namespaces/default/events,user-agent:kube-proxy/v1.27.3 (linux/amd64) kubernetes/25b4e43,verb:POST (17-Jul-2023 22:41:47.279) (total time: 739ms):
	Trace[81677459]: ["Create etcd3" audit-id:fe8892c6-5195-431c-a4b0-84dc55f09e87,key:/events/default/pause-482945.1772c8e799b2045f,type:*core.Event,resource:events 736ms (22:41:47.283)
	Trace[81677459]:  ---"TransformToStorage succeeded" 733ms (22:41:48.016)]
	Trace[81677459]: [739.369292ms] [739.369292ms] END
	I0717 22:41:57.557498       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 22:41:57.568541       1 controller.go:624] quota admission added evaluator for: endpoints
	I0717 22:41:59.317486       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-apiserver [c1ca00541c56d22fe889137e6dae933b737e0deb8f00b9e9e0c8eba1b70894c7] <==
	* 
	* 
	* ==> kube-controller-manager [1d2ae72714db8f4ff8c3fc755f466154f9817b1a38db0d90c1de112207dff34a] <==
	* I0717 22:41:57.540655       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0717 22:41:57.544234       1 shared_informer.go:318] Caches are synced for daemon sets
	I0717 22:41:57.545617       1 shared_informer.go:318] Caches are synced for endpoint
	I0717 22:41:57.547727       1 shared_informer.go:318] Caches are synced for job
	I0717 22:41:57.552140       1 shared_informer.go:318] Caches are synced for disruption
	I0717 22:41:57.554404       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0717 22:41:57.558544       1 shared_informer.go:318] Caches are synced for TTL
	I0717 22:41:57.568821       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0717 22:41:57.595614       1 shared_informer.go:318] Caches are synced for taint
	I0717 22:41:57.596132       1 node_lifecycle_controller.go:1223] "Initializing eviction metric for zone" zone=""
	I0717 22:41:57.596399       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-482945"
	I0717 22:41:57.596450       1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
	I0717 22:41:57.596470       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0717 22:41:57.596485       1 taint_manager.go:211] "Sending events to api server"
	I0717 22:41:57.597186       1 event.go:307] "Event occurred" object="pause-482945" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-482945 event: Registered Node pause-482945 in Controller"
	I0717 22:41:57.602543       1 shared_informer.go:318] Caches are synced for namespace
	I0717 22:41:57.638815       1 shared_informer.go:318] Caches are synced for attach detach
	I0717 22:41:57.693096       1 shared_informer.go:318] Caches are synced for resource quota
	I0717 22:41:57.709866       1 shared_informer.go:318] Caches are synced for persistent volume
	I0717 22:41:57.764886       1 shared_informer.go:318] Caches are synced for resource quota
	I0717 22:41:58.087374       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 22:41:58.087462       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0717 22:41:58.101929       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 22:41:59.326223       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
	I0717 22:41:59.346063       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-dk4wn"
	
	* 
	* ==> kube-controller-manager [3ad17ba250549dc1c6d445b231868022a4136390ccc25bacf622646bf1ea018f] <==
	* I0717 22:41:01.623077       1 serving.go:348] Generated self-signed cert in-memory
	I0717 22:41:02.205854       1 controllermanager.go:187] "Starting" version="v1.27.3"
	I0717 22:41:02.206084       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 22:41:02.209148       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0717 22:41:02.210618       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0717 22:41:02.210939       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 22:41:02.211198       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0717 22:41:14.241652       1 controllermanager.go:233] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[-]etcd failed: reason withheld\\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/rbac/bootstrap-roles ok\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]po
ststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-status-available-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	* 
	* ==> kube-proxy [7e3ec1be732d0fd461040ac16ca86d4c18696586aa8e9f36da865743a34d034e] <==
	* E0717 22:40:42.663212       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-482945": dial tcp 192.168.61.117:8443: connect: connection refused
	E0717 22:40:43.801591       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-482945": dial tcp 192.168.61.117:8443: connect: connection refused
	E0717 22:40:45.982623       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-482945": dial tcp 192.168.61.117:8443: connect: connection refused
	E0717 22:40:50.240713       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-482945": dial tcp 192.168.61.117:8443: connect: connection refused
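The exited attempt-1 kube-proxy could not reach the API server at 192.168.61.117:8443 between 22:40:42 and 22:40:50, which lines up with the window in which the apiserver container itself was being restarted; the attempt-2 instance below retrieves the node IP successfully at 22:41:47. A direct health probe from inside the guest would look roughly like this (a sketch, assuming curl is available in the guest image):

	minikube -p pause-482945 ssh "curl -sk https://localhost:8443/healthz"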
	
	* 
	* ==> kube-proxy [f6a8f9e69f45d4a6a76c15b6645a0c5795d518593044528d26e43dd0e95b4308] <==
	* I0717 22:41:47.183291       1 node.go:141] Successfully retrieved node IP: 192.168.61.117
	I0717 22:41:47.183484       1 server_others.go:110] "Detected node IP" address="192.168.61.117"
	I0717 22:41:47.183576       1 server_others.go:554] "Using iptables proxy"
	I0717 22:41:47.256704       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0717 22:41:47.256758       1 server_others.go:192] "Using iptables Proxier"
	I0717 22:41:47.256838       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 22:41:47.257843       1 server.go:658] "Version info" version="v1.27.3"
	I0717 22:41:47.257896       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 22:41:47.259446       1 config.go:188] "Starting service config controller"
	I0717 22:41:47.259518       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 22:41:47.259552       1 config.go:97] "Starting endpoint slice config controller"
	I0717 22:41:47.259556       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 22:41:47.262944       1 config.go:315] "Starting node config controller"
	I0717 22:41:47.263088       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 22:41:47.360414       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0717 22:41:47.360639       1 shared_informer.go:318] Caches are synced for service config
	I0717 22:41:47.363151       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [7749c0ac83e21eb203e9057eaa6f99e45faf7da9763ce362db9940be8913148a] <==
	* I0717 22:41:01.693694       1 serving.go:348] Generated self-signed cert in-memory
	I0717 22:41:43.499395       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.3"
	I0717 22:41:43.499461       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 22:41:43.504820       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0717 22:41:43.504967       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0717 22:41:43.505118       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0717 22:41:43.505226       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 22:41:43.508382       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 22:41:43.508500       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 22:41:43.508523       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0717 22:41:43.508614       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0717 22:41:43.605700       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0717 22:41:43.609420       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0717 22:41:43.609694       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
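
"Generated self-signed cert in-memory" above appears to be the serving certificate the scheduler then uses on 127.0.0.1:10259. A standard-library sketch of generating such a cert in memory is below; it only illustrates the mechanism, is not the scheduler's actual code, and the subject name and validity period are made up:

    // selfsigned.go — illustrative in-memory self-signed certificate generation.
    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "kube-scheduler"}, // illustrative name
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour), // illustrative validity
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	// Self-signed: the template is its own parent.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("self-signed cert: %d DER bytes, valid until %s\n", len(der), tmpl.NotAfter)
    }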
	
	* 
	* ==> kube-scheduler [8d8373a804c69ba8d1619648497e02da7e9871a87beae160c2ce8150e37fc0ea] <==
	* 
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-07-17 22:39:09 UTC, ends at Mon 2023-07-17 22:42:05 UTC. --
	Jul 17 22:42:04 pause-482945 kubelet[3750]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 22:42:04 pause-482945 kubelet[3750]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 22:42:04 pause-482945 kubelet[3750]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 22:42:04 pause-482945 kubelet[3750]: E0717 22:42:04.622216    3750 manager.go:1106] Failed to create existing container: /kubepods/burstable/podbc8d774132f3e0d505df5afbd8cf90cf/crio-df80402bed589683bbd5973ac968bd2cf6db69fbc26c80a050d3f5c2dd05ac45: Error finding container df80402bed589683bbd5973ac968bd2cf6db69fbc26c80a050d3f5c2dd05ac45: Status 404 returned error can't find the container with id df80402bed589683bbd5973ac968bd2cf6db69fbc26c80a050d3f5c2dd05ac45
	Jul 17 22:42:04 pause-482945 kubelet[3750]: E0717 22:42:04.623070    3750 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod79007c8b63df44d0b74e723ffe8e6a07/crio-9f8cf08949e7302b373ff6a2533d1550dc21c212ed789d76063b3ce7c73853d3: Error finding container 9f8cf08949e7302b373ff6a2533d1550dc21c212ed789d76063b3ce7c73853d3: Status 404 returned error can't find the container with id 9f8cf08949e7302b373ff6a2533d1550dc21c212ed789d76063b3ce7c73853d3
	Jul 17 22:42:04 pause-482945 kubelet[3750]: E0717 22:42:04.624364    3750 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod8cb0c3963dd9c9298d8758b4a0d5be12/crio-a0025da2db90835f245b5f8a6d0bc9e679beedcd18b51800b617c700ae53d5da: Error finding container a0025da2db90835f245b5f8a6d0bc9e679beedcd18b51800b617c700ae53d5da: Status 404 returned error can't find the container with id a0025da2db90835f245b5f8a6d0bc9e679beedcd18b51800b617c700ae53d5da
	Jul 17 22:42:04 pause-482945 kubelet[3750]: I0717 22:42:04.681127    3750 scope.go:115] "RemoveContainer" containerID="2bb26acf8425b19f67ced9fdf9b40e79fc7302c4fd666183ac14e0d1dcce5d86"
	Jul 17 22:42:04 pause-482945 kubelet[3750]: I0717 22:42:04.721657    3750 scope.go:115] "RemoveContainer" containerID="06bb567206e635dcd0a8952bebe235dcfe307f7da66faa94045ef7bf035fbf59"
	Jul 17 22:42:04 pause-482945 kubelet[3750]: I0717 22:42:04.735932    3750 scope.go:115] "RemoveContainer" containerID="06bb567206e635dcd0a8952bebe235dcfe307f7da66faa94045ef7bf035fbf59"
	Jul 17 22:42:04 pause-482945 kubelet[3750]: I0717 22:42:04.781714    3750 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8fsl\" (UniqueName: \"kubernetes.io/projected/d503aa06-1a7d-405f-8a1d-7c97f5901d9c-kube-api-access-j8fsl\") pod \"d503aa06-1a7d-405f-8a1d-7c97f5901d9c\" (UID: \"d503aa06-1a7d-405f-8a1d-7c97f5901d9c\") "
	Jul 17 22:42:04 pause-482945 kubelet[3750]: I0717 22:42:04.781767    3750 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d503aa06-1a7d-405f-8a1d-7c97f5901d9c-config-volume\") pod \"d503aa06-1a7d-405f-8a1d-7c97f5901d9c\" (UID: \"d503aa06-1a7d-405f-8a1d-7c97f5901d9c\") "
	Jul 17 22:42:04 pause-482945 kubelet[3750]: W0717 22:42:04.782114    3750 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/d503aa06-1a7d-405f-8a1d-7c97f5901d9c/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Jul 17 22:42:04 pause-482945 kubelet[3750]: I0717 22:42:04.782348    3750 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d503aa06-1a7d-405f-8a1d-7c97f5901d9c-config-volume" (OuterVolumeSpecName: "config-volume") pod "d503aa06-1a7d-405f-8a1d-7c97f5901d9c" (UID: "d503aa06-1a7d-405f-8a1d-7c97f5901d9c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Jul 17 22:42:04 pause-482945 kubelet[3750]: I0717 22:42:04.786553    3750 scope.go:115] "RemoveContainer" containerID="2bb26acf8425b19f67ced9fdf9b40e79fc7302c4fd666183ac14e0d1dcce5d86"
	Jul 17 22:42:04 pause-482945 kubelet[3750]: E0717 22:42:04.787358    3750 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bb26acf8425b19f67ced9fdf9b40e79fc7302c4fd666183ac14e0d1dcce5d86\": container with ID starting with 2bb26acf8425b19f67ced9fdf9b40e79fc7302c4fd666183ac14e0d1dcce5d86 not found: ID does not exist" containerID="2bb26acf8425b19f67ced9fdf9b40e79fc7302c4fd666183ac14e0d1dcce5d86"
	Jul 17 22:42:04 pause-482945 kubelet[3750]: I0717 22:42:04.787408    3750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:2bb26acf8425b19f67ced9fdf9b40e79fc7302c4fd666183ac14e0d1dcce5d86} err="failed to get container status \"2bb26acf8425b19f67ced9fdf9b40e79fc7302c4fd666183ac14e0d1dcce5d86\": rpc error: code = NotFound desc = could not find container \"2bb26acf8425b19f67ced9fdf9b40e79fc7302c4fd666183ac14e0d1dcce5d86\": container with ID starting with 2bb26acf8425b19f67ced9fdf9b40e79fc7302c4fd666183ac14e0d1dcce5d86 not found: ID does not exist"
	Jul 17 22:42:04 pause-482945 kubelet[3750]: I0717 22:42:04.787460    3750 scope.go:115] "RemoveContainer" containerID="06bb567206e635dcd0a8952bebe235dcfe307f7da66faa94045ef7bf035fbf59"
	Jul 17 22:42:04 pause-482945 kubelet[3750]: E0717 22:42:04.788258    3750 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06bb567206e635dcd0a8952bebe235dcfe307f7da66faa94045ef7bf035fbf59\": container with ID starting with 06bb567206e635dcd0a8952bebe235dcfe307f7da66faa94045ef7bf035fbf59 not found: ID does not exist" containerID="06bb567206e635dcd0a8952bebe235dcfe307f7da66faa94045ef7bf035fbf59"
	Jul 17 22:42:04 pause-482945 kubelet[3750]: I0717 22:42:04.788330    3750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:06bb567206e635dcd0a8952bebe235dcfe307f7da66faa94045ef7bf035fbf59} err="failed to get container status \"06bb567206e635dcd0a8952bebe235dcfe307f7da66faa94045ef7bf035fbf59\": rpc error: code = NotFound desc = could not find container \"06bb567206e635dcd0a8952bebe235dcfe307f7da66faa94045ef7bf035fbf59\": container with ID starting with 06bb567206e635dcd0a8952bebe235dcfe307f7da66faa94045ef7bf035fbf59 not found: ID does not exist"
	Jul 17 22:42:04 pause-482945 kubelet[3750]: I0717 22:42:04.805447    3750 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d503aa06-1a7d-405f-8a1d-7c97f5901d9c-kube-api-access-j8fsl" (OuterVolumeSpecName: "kube-api-access-j8fsl") pod "d503aa06-1a7d-405f-8a1d-7c97f5901d9c" (UID: "d503aa06-1a7d-405f-8a1d-7c97f5901d9c"). InnerVolumeSpecName "kube-api-access-j8fsl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 22:42:04 pause-482945 kubelet[3750]: E0717 22:42:04.820857    3750 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = no such id: '06bb567206e635dcd0a8952bebe235dcfe307f7da66faa94045ef7bf035fbf59'" containerID="06bb567206e635dcd0a8952bebe235dcfe307f7da66faa94045ef7bf035fbf59"
	Jul 17 22:42:04 pause-482945 kubelet[3750]: E0717 22:42:04.821118    3750 kuberuntime_gc.go:150] "Failed to remove container" err="rpc error: code = Unknown desc = no such id: '06bb567206e635dcd0a8952bebe235dcfe307f7da66faa94045ef7bf035fbf59'" containerID="06bb567206e635dcd0a8952bebe235dcfe307f7da66faa94045ef7bf035fbf59"
	Jul 17 22:42:04 pause-482945 kubelet[3750]: I0717 22:42:04.883432    3750 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-j8fsl\" (UniqueName: \"kubernetes.io/projected/d503aa06-1a7d-405f-8a1d-7c97f5901d9c-kube-api-access-j8fsl\") on node \"pause-482945\" DevicePath \"\""
	Jul 17 22:42:04 pause-482945 kubelet[3750]: I0717 22:42:04.883507    3750 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d503aa06-1a7d-405f-8a1d-7c97f5901d9c-config-volume\") on node \"pause-482945\" DevicePath \"\""
	Jul 17 22:42:04 pause-482945 kubelet[3750]: E0717 22:42:04.928745    3750 cadvisor_stats_provider.go:442] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/burstable/podd503aa06-1a7d-405f-8a1d-7c97f5901d9c/crio-b7b26560eed5dde0ff5750e33522c33f5b537097e79826022dd708495760f8f2\": RecentStats: unable to find data in memory cache], [\"/kubepods/burstable/podd503aa06-1a7d-405f-8a1d-7c97f5901d9c/crio-conmon-b7b26560eed5dde0ff5750e33522c33f5b537097e79826022dd708495760f8f2\": RecentStats: unable to find data in memory cache]"
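
The kubelet errors above ("rpc error: code = NotFound ... container ... not found", "no such id") come from ContainerStatus/RemoveContainer calls for containers CRI-O had already discarded across the restart — common churn during a restart rather than necessarily the cause of this failure. A small sketch of the gRPC status-code check a caller can use to tell "already gone" apart from a real error (illustrative only; the container ID is shortened):

    // notfound.go — distinguishing a gRPC NotFound from other failures.
    package main

    import (
    	"errors"
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    func isNotFound(err error) bool {
    	return status.Code(err) == codes.NotFound
    }

    func main() {
    	// Simulate the kind of error recorded in the kubelet log above.
    	err := status.Error(codes.NotFound, "could not find container \"2bb26acf…\"")
    	fmt.Println("not found:", isNotFound(err))             // true  -> container already gone
    	fmt.Println("not found:", isNotFound(errors.New("x"))) // false -> a real failure
    }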
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-482945 -n pause-482945
helpers_test.go:261: (dbg) Run:  kubectl --context pause-482945 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-482945 -n pause-482945
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-482945 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-482945 logs -n 25: (1.50020737s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-env-939164           | force-systemd-env-939164  | jenkins | v1.31.0 | 17 Jul 23 22:37 UTC | 17 Jul 23 22:38 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-730116             | running-upgrade-730116    | jenkins | v1.31.0 | 17 Jul 23 22:37 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-201894 ssh cat     | force-systemd-flag-201894 | jenkins | v1.31.0 | 17 Jul 23 22:38 UTC | 17 Jul 23 22:38 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-201894          | force-systemd-flag-201894 | jenkins | v1.31.0 | 17 Jul 23 22:38 UTC | 17 Jul 23 22:38 UTC |
	| start   | -p cert-expiration-366864             | cert-expiration-366864    | jenkins | v1.31.0 | 17 Jul 23 22:38 UTC | 17 Jul 23 22:38 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-730116             | running-upgrade-730116    | jenkins | v1.31.0 | 17 Jul 23 22:38 UTC | 17 Jul 23 22:38 UTC |
	| start   | -p cert-options-259016                | cert-options-259016       | jenkins | v1.31.0 | 17 Jul 23 22:38 UTC | 17 Jul 23 22:39 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-939164           | force-systemd-env-939164  | jenkins | v1.31.0 | 17 Jul 23 22:38 UTC | 17 Jul 23 22:38 UTC |
	| start   | -p pause-482945 --memory=2048         | pause-482945              | jenkins | v1.31.0 | 17 Jul 23 22:38 UTC | 17 Jul 23 22:39 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-132802             | stopped-upgrade-132802    | jenkins | v1.31.0 | 17 Jul 23 22:38 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-259016 ssh               | cert-options-259016       | jenkins | v1.31.0 | 17 Jul 23 22:39 UTC | 17 Jul 23 22:39 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-259016 -- sudo        | cert-options-259016       | jenkins | v1.31.0 | 17 Jul 23 22:39 UTC | 17 Jul 23 22:39 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-259016                | cert-options-259016       | jenkins | v1.31.0 | 17 Jul 23 22:39 UTC | 17 Jul 23 22:39 UTC |
	| start   | -p NoKubernetes-431736                | NoKubernetes-431736       | jenkins | v1.31.0 | 17 Jul 23 22:39 UTC |                     |
	|         | --no-kubernetes                       |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20             |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-431736                | NoKubernetes-431736       | jenkins | v1.31.0 | 17 Jul 23 22:39 UTC | 17 Jul 23 22:40 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-482945                       | pause-482945              | jenkins | v1.31.0 | 17 Jul 23 22:39 UTC | 17 Jul 23 22:42 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-132802             | stopped-upgrade-132802    | jenkins | v1.31.0 | 17 Jul 23 22:40 UTC | 17 Jul 23 22:40 UTC |
	| start   | -p old-k8s-version-332820             | old-k8s-version-332820    | jenkins | v1.31.0 | 17 Jul 23 22:40 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --kvm-network=default                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts               |                           |         |         |                     |                     |
	|         | --keep-context=false                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0          |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-431736                | NoKubernetes-431736       | jenkins | v1.31.0 | 17 Jul 23 22:40 UTC | 17 Jul 23 22:41 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-431736                | NoKubernetes-431736       | jenkins | v1.31.0 | 17 Jul 23 22:41 UTC | 17 Jul 23 22:41 UTC |
	| start   | -p NoKubernetes-431736                | NoKubernetes-431736       | jenkins | v1.31.0 | 17 Jul 23 22:41 UTC | 17 Jul 23 22:41 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-431736 sudo           | NoKubernetes-431736       | jenkins | v1.31.0 | 17 Jul 23 22:41 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| start   | -p cert-expiration-366864             | cert-expiration-366864    | jenkins | v1.31.0 | 17 Jul 23 22:41 UTC |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-431736                | NoKubernetes-431736       | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:42 UTC |
	| start   | -p NoKubernetes-431736                | NoKubernetes-431736       | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
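
Each row of the Audit table above records one minikube invocation on this CI host (long argument lists wrap onto continuation rows), and rows with an empty End Time are invocations that apparently had not recorded a completion when the logs were collected, such as the running-upgrade-730116 and old-k8s-version-332820 starts. A tiny sketch of splitting one simplified, single-line row into its columns:

    // auditrow.go — illustrative parsing of one Audit table row as printed by "minikube logs".
    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	// Simplified single-line row; in the real table the Args column wraps onto
    	// continuation rows whose other cells are blank.
    	row := "| start   | -p pause-482945 --memory=2048 | pause-482945 | jenkins | v1.31.0 | 17 Jul 23 22:38 UTC | 17 Jul 23 22:39 UTC |"
    	cells := strings.Split(strings.Trim(row, "|"), "|")
    	for i := range cells {
    		cells[i] = strings.TrimSpace(cells[i])
    	}
    	// cells: [Command Args Profile User Version StartTime EndTime]
    	fmt.Printf("command=%q profile=%q endTime=%q\n", cells[0], cells[2], cells[6])
    }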
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 22:42:02
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
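
Per that header format, the first line below, "I0717 22:42:02.325737   51650 out.go:296] ...", decodes as: Info severity, July 17, wall-clock time 22:42:02.325737, writer process id 51650, and source location out.go:296. Several concurrent minikube invocations write into this "Last Start" section, so their lines interleave; they can be separated by that id: 51650 is the NoKubernetes-431736 start, 51523 the cert-expiration-366864 start, and 50275 the pause-482945 start under test.
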
	I0717 22:42:02.325737   51650 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:42:02.325848   51650 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:42:02.325851   51650 out.go:309] Setting ErrFile to fd 2...
	I0717 22:42:02.325855   51650 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:42:02.326099   51650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-15759/.minikube/bin
	I0717 22:42:02.326632   51650 out.go:303] Setting JSON to false
	I0717 22:42:02.327611   51650 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8674,"bootTime":1689625048,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 22:42:02.327664   51650 start.go:138] virtualization: kvm guest
	I0717 22:42:02.330082   51650 out.go:177] * [NoKubernetes-431736] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 22:42:02.331823   51650 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 22:42:02.331821   51650 notify.go:220] Checking for updates...
	I0717 22:42:02.333449   51650 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 22:42:02.334993   51650 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:42:02.336477   51650 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-15759/.minikube
	I0717 22:42:02.338936   51650 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 22:42:02.340664   51650 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 22:42:02.342898   51650 config.go:182] Loaded profile config "NoKubernetes-431736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0717 22:42:02.343373   51650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:42:02.343445   51650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:42:02.360168   51650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42425
	I0717 22:42:02.360581   51650 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:42:02.361139   51650 main.go:141] libmachine: Using API Version  1
	I0717 22:42:02.361154   51650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:42:02.361602   51650 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:42:02.361792   51650 main.go:141] libmachine: (NoKubernetes-431736) Calling .DriverName
	I0717 22:42:02.362030   51650 start.go:1698] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I0717 22:42:02.362054   51650 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 22:42:02.362318   51650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:42:02.362344   51650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:42:02.377001   51650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35995
	I0717 22:42:02.377606   51650 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:42:02.378193   51650 main.go:141] libmachine: Using API Version  1
	I0717 22:42:02.378211   51650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:42:02.378508   51650 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:42:02.378701   51650 main.go:141] libmachine: (NoKubernetes-431736) Calling .DriverName
	I0717 22:42:02.423178   51650 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 22:42:02.424675   51650 start.go:298] selected driver: kvm2
	I0717 22:42:02.424684   51650 start.go:880] validating driver "kvm2" against &{Name:NoKubernetes-431736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-
431736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:42:02.424810   51650 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 22:42:02.425262   51650 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:42:02.425367   51650 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16899-15759/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 22:42:02.440851   51650 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.0
	I0717 22:42:02.441854   51650 cni.go:84] Creating CNI manager for ""
	I0717 22:42:02.441870   51650 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:42:02.441881   51650 start_flags.go:319] config:
	{Name:NoKubernetes-431736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-431736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerI
Ps:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.186 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0}
	I0717 22:42:02.442073   51650 iso.go:125] acquiring lock: {Name:mk0fc3164b0f4222cb2c3b274841202f1f6c0fc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:42:02.444062   51650 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-431736
	I0717 22:42:02.445581   51650 preload.go:132] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W0717 22:42:02.475960   51650 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0717 22:42:02.476133   51650 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/NoKubernetes-431736/config.json ...
	I0717 22:42:02.476451   51650 start.go:365] acquiring machines lock for NoKubernetes-431736: {Name:mk8a02d78ba6485e1a7d39c2e7feff944c6a9dbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 22:42:02.476524   51650 start.go:369] acquired machines lock for "NoKubernetes-431736" in 55.288µs
	I0717 22:42:02.476538   51650 start.go:96] Skipping create...Using existing machine configuration
	I0717 22:42:02.476543   51650 fix.go:54] fixHost starting: 
	I0717 22:42:02.476948   51650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:42:02.476985   51650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:42:02.492319   51650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45869
	I0717 22:42:02.492711   51650 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:42:02.493175   51650 main.go:141] libmachine: Using API Version  1
	I0717 22:42:02.493187   51650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:42:02.493456   51650 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:42:02.493661   51650 main.go:141] libmachine: (NoKubernetes-431736) Calling .DriverName
	I0717 22:42:02.493814   51650 main.go:141] libmachine: (NoKubernetes-431736) Calling .GetState
	I0717 22:42:02.495567   51650 fix.go:102] recreateIfNeeded on NoKubernetes-431736: state=Stopped err=<nil>
	I0717 22:42:02.495600   51650 main.go:141] libmachine: (NoKubernetes-431736) Calling .DriverName
	W0717 22:42:02.495775   51650 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 22:42:02.497834   51650 out.go:177] * Restarting existing kvm2 VM for "NoKubernetes-431736" ...
	I0717 22:42:01.475621   51523 main.go:141] libmachine: (cert-expiration-366864) Calling .GetIP
	I0717 22:42:01.478407   51523 main.go:141] libmachine: (cert-expiration-366864) DBG | domain cert-expiration-366864 has defined MAC address 52:54:00:da:15:f3 in network mk-cert-expiration-366864
	I0717 22:42:01.478862   51523 main.go:141] libmachine: (cert-expiration-366864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:15:f3", ip: ""} in network mk-cert-expiration-366864: {Iface:virbr1 ExpiryTime:2023-07-17 23:38:22 +0000 UTC Type:0 Mac:52:54:00:da:15:f3 Iaid: IPaddr:192.168.72.23 Prefix:24 Hostname:cert-expiration-366864 Clientid:01:52:54:00:da:15:f3}
	I0717 22:42:01.478886   51523 main.go:141] libmachine: (cert-expiration-366864) DBG | domain cert-expiration-366864 has defined IP address 192.168.72.23 and MAC address 52:54:00:da:15:f3 in network mk-cert-expiration-366864
	I0717 22:42:01.479092   51523 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 22:42:01.484163   51523 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 22:42:01.484247   51523 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:42:01.522450   51523 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 22:42:01.522461   51523 crio.go:415] Images already preloaded, skipping extraction
	I0717 22:42:01.522516   51523 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:42:01.556555   51523 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 22:42:01.556569   51523 cache_images.go:84] Images are preloaded, skipping loading
	I0717 22:42:01.556649   51523 ssh_runner.go:195] Run: crio config
	I0717 22:42:01.637104   51523 cni.go:84] Creating CNI manager for ""
	I0717 22:42:01.637126   51523 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:42:01.637138   51523 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 22:42:01.637159   51523 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.23 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-366864 NodeName:cert-expiration-366864 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.23"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.23 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 22:42:01.637342   51523 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.23
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-366864"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.23
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.23"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
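
The "0%!"(MISSING)" strings in the evictionHard block above are a formatting artifact: the generated kubeadm config is echoed through Go's fmt machinery, and a literal % followed by a quote with no matching argument renders as %!"(MISSING). The values written to the node should simply read "0%" for all three thresholds. A two-line reproduction of the artifact:

    // fmtdemo.go — reproduces the %!"(MISSING) artifact seen in the config dump above.
    package main

    import "fmt"

    func main() {
    	// The format string contains a bare % with no operand, exactly as in the
    	// echoed config, so fmt reports the verb as missing its argument.
    	fmt.Printf("nodefs.available: \"0%\"\n") // prints: nodefs.available: "0%!"(MISSING)
    }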
	
	I0717 22:42:01.637411   51523 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=cert-expiration-366864 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.23
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:cert-expiration-366864 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 22:42:01.637460   51523 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 22:42:01.647139   51523 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 22:42:01.647219   51523 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 22:42:01.656979   51523 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0717 22:42:01.675385   51523 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 22:42:01.692333   51523 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0717 22:42:01.709832   51523 ssh_runner.go:195] Run: grep 192.168.72.23	control-plane.minikube.internal$ /etc/hosts
	I0717 22:42:01.714303   51523 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864 for IP: 192.168.72.23
	I0717 22:42:01.714329   51523 certs.go:190] acquiring lock for shared ca certs: {Name:mk358cdd8ffcf2f8ada4337c2bb687d932ef5afe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:42:01.714519   51523 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key
	I0717 22:42:01.714572   51523 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key
	I0717 22:42:01.714699   51523 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/client.key
	W0717 22:42:01.714851   51523 out.go:239] ! Certificate apiserver.crt.a30a8404 has expired. Generating a new one...
	I0717 22:42:01.714879   51523 certs.go:576] cert expired /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/apiserver.crt.a30a8404: expiration: 2023-07-17 22:41:37 +0000 UTC, now: 2023-07-17 22:42:01.714873961 +0000 UTC m=+8.601907876
	I0717 22:42:01.715009   51523 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/apiserver.key.a30a8404
	I0717 22:42:01.715036   51523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/apiserver.crt.a30a8404 with IP's: [192.168.72.23 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 22:42:02.033200   51523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/apiserver.crt.a30a8404 ...
	I0717 22:42:02.033214   51523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/apiserver.crt.a30a8404: {Name:mk8486f495aaa1ce6b522ea4a96e31af79ee387c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:42:02.033370   51523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/apiserver.key.a30a8404 ...
	I0717 22:42:02.033380   51523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/apiserver.key.a30a8404: {Name:mk0b56464450e04f557ce8fc512d6f97569baa87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:42:02.033461   51523 certs.go:337] copying /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/apiserver.crt.a30a8404 -> /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/apiserver.crt
	I0717 22:42:02.033613   51523 certs.go:341] copying /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/apiserver.key.a30a8404 -> /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/apiserver.key
	W0717 22:42:02.033771   51523 out.go:239] ! Certificate proxy-client.crt has expired. Generating a new one...
	I0717 22:42:02.033786   51523 certs.go:576] cert expired /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/proxy-client.crt: expiration: 2023-07-17 22:41:37 +0000 UTC, now: 2023-07-17 22:42:02.033783075 +0000 UTC m=+8.920816986
	I0717 22:42:02.033833   51523 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/proxy-client.key
	I0717 22:42:02.033843   51523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/proxy-client.crt with IP's: []
	I0717 22:42:02.094751   51523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/proxy-client.crt ...
	I0717 22:42:02.094766   51523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/proxy-client.crt: {Name:mk751c6ad37ddd4609934e56ab7244c6ca5c8456 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:42:02.094882   51523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/proxy-client.key ...
	I0717 22:42:02.094887   51523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/proxy-client.key: {Name:mk9ec81c036c5942501cf1fa4a1b2918f0b99eaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:42:02.095032   51523 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem (1338 bytes)
	W0717 22:42:02.095059   51523 certs.go:433] ignoring /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990_empty.pem, impossibly tiny 0 bytes
	I0717 22:42:02.095070   51523 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 22:42:02.095089   51523 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem (1078 bytes)
	I0717 22:42:02.095106   51523 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem (1123 bytes)
	I0717 22:42:02.095126   51523 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem (1675 bytes)
	I0717 22:42:02.095168   51523 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:42:02.095711   51523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 22:42:02.216228   51523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 22:42:02.342939   51523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 22:42:02.424238   51523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/cert-expiration-366864/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 22:42:02.460166   51523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 22:42:02.529598   51523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 22:42:02.589779   51523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 22:42:02.633835   51523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 22:42:02.689499   51523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 22:42:02.724279   51523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem --> /usr/share/ca-certificates/22990.pem (1338 bytes)
	I0717 22:42:02.766029   51523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /usr/share/ca-certificates/229902.pem (1708 bytes)
	I0717 22:42:02.823507   51523 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 22:42:02.862211   51523 ssh_runner.go:195] Run: openssl version
	I0717 22:42:02.879785   51523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/229902.pem && ln -fs /usr/share/ca-certificates/229902.pem /etc/ssl/certs/229902.pem"
	I0717 22:42:02.906219   51523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/229902.pem
	I0717 22:42:02.917280   51523 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 21:49 /usr/share/ca-certificates/229902.pem
	I0717 22:42:02.917349   51523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/229902.pem
	I0717 22:42:02.928766   51523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/229902.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 22:42:02.943680   51523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 22:42:02.960916   51523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:42:02.969263   51523 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:42:02.969314   51523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:42:02.979117   51523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 22:42:02.993085   51523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22990.pem && ln -fs /usr/share/ca-certificates/22990.pem /etc/ssl/certs/22990.pem"
	I0717 22:42:03.008967   51523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22990.pem
	I0717 22:42:03.018795   51523 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 21:49 /usr/share/ca-certificates/22990.pem
	I0717 22:42:03.018839   51523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22990.pem
	I0717 22:42:03.028111   51523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22990.pem /etc/ssl/certs/51391683.0"
	I0717 22:42:03.042092   51523 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 22:42:03.050335   51523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 22:42:03.060778   51523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 22:42:03.068781   51523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 22:42:03.077094   51523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 22:42:03.085091   51523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 22:42:03.093482   51523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
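
The "openssl x509 ... -checkend 86400" runs above ask whether each remaining control-plane certificate will still be valid 86400 seconds (24 hours) from now; openssl exits non-zero if the certificate will have expired by then, which apparently is what drives regeneration (the apiserver and proxy-client certs had already expired and were regenerated a few lines earlier). An equivalent check with Go's crypto/x509, as a sketch with an assumed certificate path:

    // checkend.go — illustrative equivalent of "openssl x509 -checkend 86400".
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	// Path is illustrative; any PEM-encoded certificate works.
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(86400 * time.Second) // same horizon as -checkend 86400
    	if cert.NotAfter.Before(deadline) {
    		fmt.Println("certificate expires within 24h:", cert.NotAfter)
    	} else {
    		fmt.Println("certificate is good until:", cert.NotAfter)
    	}
    }
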
	I0717 22:42:03.101293   51523 kubeadm.go:404] StartCluster: {Name:cert-expiration-366864 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:cert-expiration-366864 Name
space:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.23 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:42:03.101394   51523 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 22:42:03.101465   51523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
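The `openssl x509 -noout ... -checkend 86400` runs logged at 22:42:03 above verify that each control-plane certificate (apiserver-etcd-client, apiserver-kubelet-client, etcd server/peer/healthcheck-client, front-proxy-client) will still be valid 86400 seconds (24 hours) from now. Below is a minimal standalone Go sketch of that same check using crypto/x509; the file path is copied from the log, but the program itself is illustrative only and is not minikube's actual implementation.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Certificate path as seen in the log above; any PEM-encoded certificate works.
	path := "/var/lib/minikube/certs/apiserver-etcd-client.crt"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(2)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found in", path)
		os.Exit(2)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, "parse:", err)
		os.Exit(2)
	}
	// Equivalent of `openssl x509 -checkend 86400`: fail if the certificate
	// expires within the next 24 hours.
	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
		fmt.Printf("certificate %s expires within 24h (NotAfter=%s)\n", path, cert.NotAfter)
		os.Exit(1)
	}
	fmt.Printf("certificate %s is valid for at least another 24h\n", path)
}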
	I0717 22:42:02.058163   50275 pod_ready.go:92] pod "kube-proxy-g265v" in "kube-system" namespace has status "Ready":"True"
	I0717 22:42:02.058193   50275 pod_ready.go:81] duration metric: took 404.301309ms waiting for pod "kube-proxy-g265v" in "kube-system" namespace to be "Ready" ...
	I0717 22:42:02.058206   50275 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-482945" in "kube-system" namespace to be "Ready" ...
	I0717 22:42:02.454335   50275 pod_ready.go:92] pod "kube-scheduler-pause-482945" in "kube-system" namespace has status "Ready":"True"
	I0717 22:42:02.454353   50275 pod_ready.go:81] duration metric: took 396.140042ms waiting for pod "kube-scheduler-pause-482945" in "kube-system" namespace to be "Ready" ...
	I0717 22:42:02.454361   50275 pod_ready.go:38] duration metric: took 2.607383279s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:42:02.454375   50275 api_server.go:52] waiting for apiserver process to appear ...
	I0717 22:42:02.454422   50275 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:42:02.467708   50275 api_server.go:72] duration metric: took 2.644473091s to wait for apiserver process to appear ...
	I0717 22:42:02.467732   50275 api_server.go:88] waiting for apiserver healthz status ...
	I0717 22:42:02.467754   50275 api_server.go:253] Checking apiserver healthz at https://192.168.61.117:8443/healthz ...
	I0717 22:42:02.475158   50275 api_server.go:279] https://192.168.61.117:8443/healthz returned 200:
	ok
	I0717 22:42:02.476888   50275 api_server.go:141] control plane version: v1.27.3
	I0717 22:42:02.476909   50275 api_server.go:131] duration metric: took 9.170645ms to wait for apiserver health ...
	I0717 22:42:02.476919   50275 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 22:42:02.657701   50275 system_pods.go:59] 7 kube-system pods found
	I0717 22:42:02.657723   50275 system_pods.go:61] "coredns-5d78c9869d-dk4wn" [d503aa06-1a7d-405f-8a1d-7c97f5901d9c] Running
	I0717 22:42:02.657728   50275 system_pods.go:61] "coredns-5d78c9869d-n5clq" [fcf8c414-139d-4e80-b399-989e458a4a30] Running
	I0717 22:42:02.657733   50275 system_pods.go:61] "etcd-pause-482945" [4ff77c7a-6b11-4010-b007-68fc5955b707] Running
	I0717 22:42:02.657737   50275 system_pods.go:61] "kube-apiserver-pause-482945" [48ceea8f-a971-4cbf-8cd2-94aedf6d3106] Running
	I0717 22:42:02.657741   50275 system_pods.go:61] "kube-controller-manager-pause-482945" [1e1e4675-f2d1-437b-9897-2d21b1402979] Running
	I0717 22:42:02.657745   50275 system_pods.go:61] "kube-proxy-g265v" [161f1f66-5158-437d-b56d-37ff4b108182] Running
	I0717 22:42:02.657748   50275 system_pods.go:61] "kube-scheduler-pause-482945" [5725079e-b6fd-4632-87ee-0128b2c0b84b] Running
	I0717 22:42:02.657754   50275 system_pods.go:74] duration metric: took 180.8303ms to wait for pod list to return data ...
	I0717 22:42:02.657760   50275 default_sa.go:34] waiting for default service account to be created ...
	I0717 22:42:02.855230   50275 default_sa.go:45] found service account: "default"
	I0717 22:42:02.855259   50275 default_sa.go:55] duration metric: took 197.492082ms for default service account to be created ...
	I0717 22:42:02.855269   50275 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 22:42:03.057448   50275 system_pods.go:86] 7 kube-system pods found
	I0717 22:42:03.057477   50275 system_pods.go:89] "coredns-5d78c9869d-dk4wn" [d503aa06-1a7d-405f-8a1d-7c97f5901d9c] Running
	I0717 22:42:03.057485   50275 system_pods.go:89] "coredns-5d78c9869d-n5clq" [fcf8c414-139d-4e80-b399-989e458a4a30] Running
	I0717 22:42:03.057491   50275 system_pods.go:89] "etcd-pause-482945" [4ff77c7a-6b11-4010-b007-68fc5955b707] Running
	I0717 22:42:03.057497   50275 system_pods.go:89] "kube-apiserver-pause-482945" [48ceea8f-a971-4cbf-8cd2-94aedf6d3106] Running
	I0717 22:42:03.057502   50275 system_pods.go:89] "kube-controller-manager-pause-482945" [1e1e4675-f2d1-437b-9897-2d21b1402979] Running
	I0717 22:42:03.057508   50275 system_pods.go:89] "kube-proxy-g265v" [161f1f66-5158-437d-b56d-37ff4b108182] Running
	I0717 22:42:03.057526   50275 system_pods.go:89] "kube-scheduler-pause-482945" [5725079e-b6fd-4632-87ee-0128b2c0b84b] Running
	I0717 22:42:03.057533   50275 system_pods.go:126] duration metric: took 202.258854ms to wait for k8s-apps to be running ...
	I0717 22:42:03.057542   50275 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 22:42:03.057592   50275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:42:03.075442   50275 system_svc.go:56] duration metric: took 17.892619ms WaitForService to wait for kubelet.
	I0717 22:42:03.075468   50275 kubeadm.go:581] duration metric: took 3.252235214s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 22:42:03.075490   50275 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:42:03.254796   50275 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:42:03.254826   50275 node_conditions.go:123] node cpu capacity is 2
	I0717 22:42:03.254838   50275 node_conditions.go:105] duration metric: took 179.342078ms to run NodePressure ...
	I0717 22:42:03.254850   50275 start.go:228] waiting for startup goroutines ...
	I0717 22:42:03.254859   50275 start.go:233] waiting for cluster config update ...
	I0717 22:42:03.254868   50275 start.go:242] writing updated cluster config ...
	I0717 22:42:03.255216   50275 ssh_runner.go:195] Run: rm -f paused
	I0717 22:42:03.318126   50275 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 22:42:03.320135   50275 out.go:177] * Done! kubectl is now configured to use "pause-482945" cluster and "default" namespace by default
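The pause-482945 startup sequence above (22:42:02-22:42:03) waits for the kube-apiserver process, polls https://192.168.61.117:8443/healthz until it returns 200 "ok", and only then checks system pods, the default service account, and node conditions before printing "Done!". The following is a minimal Go sketch of that health poll under stated assumptions: the endpoint is taken from the log, TLS verification is skipped for brevity (the real client trusts minikube's CA instead), and anonymous access to /healthz is assumed to be allowed.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint copied from the log above.
	url := "https://192.168.61.117:8443/healthz"
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				// Matches the log: "https://192.168.61.117:8443/healthz returned 200: ok"
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("apiserver did not become healthy before the deadline")
}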
	I0717 22:42:00.358517   50512 pod_ready.go:102] pod "coredns-5644d7b6d9-s9vtg" in "kube-system" namespace has status "Ready":"False"
	I0717 22:42:02.360583   50512 pod_ready.go:102] pod "coredns-5644d7b6d9-s9vtg" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-07-17 22:39:09 UTC, ends at Mon 2023-07-17 22:42:06 UTC. --
	Jul 17 22:42:06 pause-482945 crio[2696]: time="2023-07-17 22:42:06.611059928Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ef6e1558-2736-4ca0-977e-e317b59d068d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:42:06 pause-482945 crio[2696]: time="2023-07-17 22:42:06.611401012Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7b1a3cec3d7f08c4bec4b61e9223274e0170a12540d069fd0aff05e908d4daf,PodSandboxId:c45166f36f48cfad093e7aa17bf840a6de535f48696fbbe2847c686bc94ff7cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689633706838766953,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-n5clq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf8c414-139d-4e80-b399-989e458a4a30,},Annotations:map[string]string{io.kubernetes.container.hash: 7cf71c81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a8f9e69f45d4a6a76c15b6645a0c5795d518593044528d26e43dd0e95b4308,PodSandboxId:31c8cd93d1d4a9b9aa6a1b3de8cc9d6e0ce996cebe35e6eadf33cdecbab4e311,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689633706788693697,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g265v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 161f1f66-5158-437d-b56d-37ff4b108182,},Annotations:map[string]string{io.kubernetes.container.hash: a57fd1ed,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2ae72714db8f4ff8c3fc755f466154f9817b1a38db0d90c1de112207dff34a,PodSandboxId:f3e38f10f9de54fe2388f3cceace6ef2353c69ec66e72584e6ec9a97aabe21e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689633699575103623,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-482945,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: bc8d774132f3e0d505df5afbd8cf90cf,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce2ff2a1ecaa45bdf6aa396fef0057b97c17bfa8ed5d7949f80fa0ab22edf0c2,PodSandboxId:d7d895f37ef9da3a476d1341c1149d932ecc418360caa92d17d0583de687b8ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689633698908787803,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67feb80efd1440a7d5575d681ff
300a1,},Annotations:map[string]string{io.kubernetes.container.hash: 643d762b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7749c0ac83e21eb203e9057eaa6f99e45faf7da9763ce362db9940be8913148a,PodSandboxId:5dd7fbee8125875816da72975c257920abab1fa9dfe30c718d0dedbdb7d6c5a4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689633659639272538,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79007c8b63df44d0b74e723ffe8e6a07,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad17ba250549dc1c6d445b231868022a4136390ccc25bacf622646bf1ea018f,PodSandboxId:f3e38f10f9de54fe2388f3cceace6ef2353c69ec66e72584e6ec9a97aabe21e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_EXITED,CreatedAt:1689633659672174382,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc8d774132f3e0d505df5afb
d8cf90cf,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c149109b9f6b9129034c4ee0f4717dde40e5abae9aa5b9cbacae788cfe33da,PodSandboxId:d8b88fe2f3540dea9a8286c2c9e590669182c6f49676f3495276c529ea02e4d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689633655643353499,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb0c3963dd9c9298d8758b4a0d5be12,},Annot
ations:map[string]string{io.kubernetes.container.hash: 5a7f9746,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3ec1be732d0fd461040ac16ca86d4c18696586aa8e9f36da865743a34d034e,PodSandboxId:31c8cd93d1d4a9b9aa6a1b3de8cc9d6e0ce996cebe35e6eadf33cdecbab4e311,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_EXITED,CreatedAt:1689633642490840990,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g265v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 161f1f66-5158-437d-b56d-37ff4b108182,},Annotations:map[string]string{io.kubernet
es.container.hash: a57fd1ed,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c8781aa5f3178cdd590e8e9479569ad7fefdee40094e5ba8b19699b9197821,PodSandboxId:c45166f36f48cfad093e7aa17bf840a6de535f48696fbbe2847c686bc94ff7cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1689633641564900851,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-n5clq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf8c414-139d-4e80-b399-989e458a4a30,},Annotations:map[string]string{io.kubernetes.container.hash: 7cf71c81,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f223e08272c4313fc7d0e1fdadd144a278bc3ec384d59b4017acf363e60896c,PodSandboxId:d7d895f37ef9da3a476d1341c1149d932ecc418360caa92d17d0583de687b8ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_EXITED,CreatedAt:1689633640878479989,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-482945,
io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67feb80efd1440a7d5575d681ff300a1,},Annotations:map[string]string{io.kubernetes.container.hash: 643d762b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d8373a804c69ba8d1619648497e02da7e9871a87beae160c2ce8150e37fc0ea,PodSandboxId:9f8cf08949e7302b373ff6a2533d1550dc21c212ed789d76063b3ce7c73853d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,State:CONTAINER_EXITED,CreatedAt:1689633637082147141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 79007c8b63df44d0b74e723ffe8e6a07,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1ca00541c56d22fe889137e6dae933b737e0deb8f00b9e9e0c8eba1b70894c7,PodSandboxId:a0025da2db90835f245b5f8a6d0bc9e679beedcd18b51800b617c700ae53d5da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,State:CONTAINER_EXITED,CreatedAt:1689633634813369176,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb0c3963dd9c9298d8758b4a0d5be12,},Annotations
:map[string]string{io.kubernetes.container.hash: 5a7f9746,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ef6e1558-2736-4ca0-977e-e317b59d068d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:42:06 pause-482945 crio[2696]: time="2023-07-17 22:42:06.663778880Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a9c3f86e-3fa2-4204-9525-dff5b51c9461 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:42:06 pause-482945 crio[2696]: time="2023-07-17 22:42:06.663911621Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a9c3f86e-3fa2-4204-9525-dff5b51c9461 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:42:06 pause-482945 crio[2696]: time="2023-07-17 22:42:06.664334186Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7b1a3cec3d7f08c4bec4b61e9223274e0170a12540d069fd0aff05e908d4daf,PodSandboxId:c45166f36f48cfad093e7aa17bf840a6de535f48696fbbe2847c686bc94ff7cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689633706838766953,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-n5clq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf8c414-139d-4e80-b399-989e458a4a30,},Annotations:map[string]string{io.kubernetes.container.hash: 7cf71c81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a8f9e69f45d4a6a76c15b6645a0c5795d518593044528d26e43dd0e95b4308,PodSandboxId:31c8cd93d1d4a9b9aa6a1b3de8cc9d6e0ce996cebe35e6eadf33cdecbab4e311,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689633706788693697,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g265v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 161f1f66-5158-437d-b56d-37ff4b108182,},Annotations:map[string]string{io.kubernetes.container.hash: a57fd1ed,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2ae72714db8f4ff8c3fc755f466154f9817b1a38db0d90c1de112207dff34a,PodSandboxId:f3e38f10f9de54fe2388f3cceace6ef2353c69ec66e72584e6ec9a97aabe21e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689633699575103623,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-482945,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: bc8d774132f3e0d505df5afbd8cf90cf,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce2ff2a1ecaa45bdf6aa396fef0057b97c17bfa8ed5d7949f80fa0ab22edf0c2,PodSandboxId:d7d895f37ef9da3a476d1341c1149d932ecc418360caa92d17d0583de687b8ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689633698908787803,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67feb80efd1440a7d5575d681ff
300a1,},Annotations:map[string]string{io.kubernetes.container.hash: 643d762b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7749c0ac83e21eb203e9057eaa6f99e45faf7da9763ce362db9940be8913148a,PodSandboxId:5dd7fbee8125875816da72975c257920abab1fa9dfe30c718d0dedbdb7d6c5a4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689633659639272538,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79007c8b63df44d0b74e723ffe8e6a07,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad17ba250549dc1c6d445b231868022a4136390ccc25bacf622646bf1ea018f,PodSandboxId:f3e38f10f9de54fe2388f3cceace6ef2353c69ec66e72584e6ec9a97aabe21e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_EXITED,CreatedAt:1689633659672174382,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc8d774132f3e0d505df5afb
d8cf90cf,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c149109b9f6b9129034c4ee0f4717dde40e5abae9aa5b9cbacae788cfe33da,PodSandboxId:d8b88fe2f3540dea9a8286c2c9e590669182c6f49676f3495276c529ea02e4d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689633655643353499,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb0c3963dd9c9298d8758b4a0d5be12,},Annot
ations:map[string]string{io.kubernetes.container.hash: 5a7f9746,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3ec1be732d0fd461040ac16ca86d4c18696586aa8e9f36da865743a34d034e,PodSandboxId:31c8cd93d1d4a9b9aa6a1b3de8cc9d6e0ce996cebe35e6eadf33cdecbab4e311,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_EXITED,CreatedAt:1689633642490840990,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g265v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 161f1f66-5158-437d-b56d-37ff4b108182,},Annotations:map[string]string{io.kubernet
es.container.hash: a57fd1ed,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c8781aa5f3178cdd590e8e9479569ad7fefdee40094e5ba8b19699b9197821,PodSandboxId:c45166f36f48cfad093e7aa17bf840a6de535f48696fbbe2847c686bc94ff7cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1689633641564900851,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-n5clq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf8c414-139d-4e80-b399-989e458a4a30,},Annotations:map[string]string{io.kubernetes.container.hash: 7cf71c81,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f223e08272c4313fc7d0e1fdadd144a278bc3ec384d59b4017acf363e60896c,PodSandboxId:d7d895f37ef9da3a476d1341c1149d932ecc418360caa92d17d0583de687b8ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_EXITED,CreatedAt:1689633640878479989,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-482945,
io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67feb80efd1440a7d5575d681ff300a1,},Annotations:map[string]string{io.kubernetes.container.hash: 643d762b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d8373a804c69ba8d1619648497e02da7e9871a87beae160c2ce8150e37fc0ea,PodSandboxId:9f8cf08949e7302b373ff6a2533d1550dc21c212ed789d76063b3ce7c73853d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,State:CONTAINER_EXITED,CreatedAt:1689633637082147141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 79007c8b63df44d0b74e723ffe8e6a07,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1ca00541c56d22fe889137e6dae933b737e0deb8f00b9e9e0c8eba1b70894c7,PodSandboxId:a0025da2db90835f245b5f8a6d0bc9e679beedcd18b51800b617c700ae53d5da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,State:CONTAINER_EXITED,CreatedAt:1689633634813369176,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb0c3963dd9c9298d8758b4a0d5be12,},Annotations
:map[string]string{io.kubernetes.container.hash: 5a7f9746,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a9c3f86e-3fa2-4204-9525-dff5b51c9461 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:42:06 pause-482945 crio[2696]: time="2023-07-17 22:42:06.703522463Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=d9a28ebe-97ad-4cda-8f84-6c967874821b name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 22:42:06 pause-482945 crio[2696]: time="2023-07-17 22:42:06.703862885Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:c45166f36f48cfad093e7aa17bf840a6de535f48696fbbe2847c686bc94ff7cd,Metadata:&PodSandboxMetadata{Name:coredns-5d78c9869d-n5clq,Uid:fcf8c414-139d-4e80-b399-989e458a4a30,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1689633638857635055,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5d78c9869d-n5clq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf8c414-139d-4e80-b399-989e458a4a30,k8s-app: kube-dns,pod-template-hash: 5d78c9869d,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T22:39:52.659757319Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f3e38f10f9de54fe2388f3cceace6ef2353c69ec66e72584e6ec9a97aabe21e6,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-482945,Uid:bc8d774132f3e0d505df5afbd8cf90cf,Namespace:kub
e-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1689633638767780899,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc8d774132f3e0d505df5afbd8cf90cf,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: bc8d774132f3e0d505df5afbd8cf90cf,kubernetes.io/config.seen: 2023-07-17T22:39:40.598312795Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5dd7fbee8125875816da72975c257920abab1fa9dfe30c718d0dedbdb7d6c5a4,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-482945,Uid:79007c8b63df44d0b74e723ffe8e6a07,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1689633638717938162,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79007c8b63df4
4d0b74e723ffe8e6a07,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 79007c8b63df44d0b74e723ffe8e6a07,kubernetes.io/config.seen: 2023-07-17T22:39:40.598313546Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d7d895f37ef9da3a476d1341c1149d932ecc418360caa92d17d0583de687b8ec,Metadata:&PodSandboxMetadata{Name:etcd-pause-482945,Uid:67feb80efd1440a7d5575d681ff300a1,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1689633638710622040,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67feb80efd1440a7d5575d681ff300a1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.117:2379,kubernetes.io/config.hash: 67feb80efd1440a7d5575d681ff300a1,kubernetes.io/config.seen: 2023-07-17T22:39:40.598308234Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox
{Id:d8b88fe2f3540dea9a8286c2c9e590669182c6f49676f3495276c529ea02e4d9,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-482945,Uid:8cb0c3963dd9c9298d8758b4a0d5be12,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1689633638556670732,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb0c3963dd9c9298d8758b4a0d5be12,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.117:8443,kubernetes.io/config.hash: 8cb0c3963dd9c9298d8758b4a0d5be12,kubernetes.io/config.seen: 2023-07-17T22:39:40.598311776Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:31c8cd93d1d4a9b9aa6a1b3de8cc9d6e0ce996cebe35e6eadf33cdecbab4e311,Metadata:&PodSandboxMetadata{Name:kube-proxy-g265v,Uid:161f1f66-5158-437d-b56d-37ff4b108182,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,Cre
atedAt:1689633638471515004,Labels:map[string]string{controller-revision-hash: 56999f657b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-g265v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 161f1f66-5158-437d-b56d-37ff4b108182,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T22:39:52.242894557Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9f8cf08949e7302b373ff6a2533d1550dc21c212ed789d76063b3ce7c73853d3,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-482945,Uid:79007c8b63df44d0b74e723ffe8e6a07,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1689633633491376869,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79007c8b63df44d0b74e723ffe8e6a07,tier: control-plane,},Annotations:map[string]string{kubernetes.io/con
fig.hash: 79007c8b63df44d0b74e723ffe8e6a07,kubernetes.io/config.seen: 2023-07-17T22:39:40.598313546Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a0025da2db90835f245b5f8a6d0bc9e679beedcd18b51800b617c700ae53d5da,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-482945,Uid:8cb0c3963dd9c9298d8758b4a0d5be12,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1689633633484547861,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb0c3963dd9c9298d8758b4a0d5be12,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.117:8443,kubernetes.io/config.hash: 8cb0c3963dd9c9298d8758b4a0d5be12,kubernetes.io/config.seen: 2023-07-17T22:39:40.598311776Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=d9a28ebe-97ad-
4cda-8f84-6c967874821b name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 22:42:06 pause-482945 crio[2696]: time="2023-07-17 22:42:06.704824032Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=aa485db2-0c3f-4184-b761-9197efd0971f name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 22:42:06 pause-482945 crio[2696]: time="2023-07-17 22:42:06.704941692Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=aa485db2-0c3f-4184-b761-9197efd0971f name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 22:42:06 pause-482945 crio[2696]: time="2023-07-17 22:42:06.705323148Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7b1a3cec3d7f08c4bec4b61e9223274e0170a12540d069fd0aff05e908d4daf,PodSandboxId:c45166f36f48cfad093e7aa17bf840a6de535f48696fbbe2847c686bc94ff7cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689633706838766953,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-n5clq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf8c414-139d-4e80-b399-989e458a4a30,},Annotations:map[string]string{io.kubernetes.container.hash: 7cf71c81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a8f9e69f45d4a6a76c15b6645a0c5795d518593044528d26e43dd0e95b4308,PodSandboxId:31c8cd93d1d4a9b9aa6a1b3de8cc9d6e0ce996cebe35e6eadf33cdecbab4e311,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689633706788693697,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g265v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 161f1f66-5158-437d-b56d-37ff4b108182,},Annotations:map[string]string{io.kubernetes.container.hash: a57fd1ed,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2ae72714db8f4ff8c3fc755f466154f9817b1a38db0d90c1de112207dff34a,PodSandboxId:f3e38f10f9de54fe2388f3cceace6ef2353c69ec66e72584e6ec9a97aabe21e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689633699575103623,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-482945,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: bc8d774132f3e0d505df5afbd8cf90cf,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce2ff2a1ecaa45bdf6aa396fef0057b97c17bfa8ed5d7949f80fa0ab22edf0c2,PodSandboxId:d7d895f37ef9da3a476d1341c1149d932ecc418360caa92d17d0583de687b8ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689633698908787803,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67feb80efd1440a7d5575d681ff
300a1,},Annotations:map[string]string{io.kubernetes.container.hash: 643d762b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7749c0ac83e21eb203e9057eaa6f99e45faf7da9763ce362db9940be8913148a,PodSandboxId:5dd7fbee8125875816da72975c257920abab1fa9dfe30c718d0dedbdb7d6c5a4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689633659639272538,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79007c8b63df44d0b74e723ffe8e6a07,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad17ba250549dc1c6d445b231868022a4136390ccc25bacf622646bf1ea018f,PodSandboxId:f3e38f10f9de54fe2388f3cceace6ef2353c69ec66e72584e6ec9a97aabe21e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_EXITED,CreatedAt:1689633659672174382,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc8d774132f3e0d505df5afb
d8cf90cf,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c149109b9f6b9129034c4ee0f4717dde40e5abae9aa5b9cbacae788cfe33da,PodSandboxId:d8b88fe2f3540dea9a8286c2c9e590669182c6f49676f3495276c529ea02e4d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689633655643353499,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb0c3963dd9c9298d8758b4a0d5be12,},Annot
ations:map[string]string{io.kubernetes.container.hash: 5a7f9746,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3ec1be732d0fd461040ac16ca86d4c18696586aa8e9f36da865743a34d034e,PodSandboxId:31c8cd93d1d4a9b9aa6a1b3de8cc9d6e0ce996cebe35e6eadf33cdecbab4e311,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_EXITED,CreatedAt:1689633642490840990,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g265v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 161f1f66-5158-437d-b56d-37ff4b108182,},Annotations:map[string]string{io.kubernet
es.container.hash: a57fd1ed,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c8781aa5f3178cdd590e8e9479569ad7fefdee40094e5ba8b19699b9197821,PodSandboxId:c45166f36f48cfad093e7aa17bf840a6de535f48696fbbe2847c686bc94ff7cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1689633641564900851,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-n5clq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf8c414-139d-4e80-b399-989e458a4a30,},Annotations:map[string]string{io.kubernetes.container.hash: 7cf71c81,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f223e08272c4313fc7d0e1fdadd144a278bc3ec384d59b4017acf363e60896c,PodSandboxId:d7d895f37ef9da3a476d1341c1149d932ecc418360caa92d17d0583de687b8ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_EXITED,CreatedAt:1689633640878479989,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-482945,
io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67feb80efd1440a7d5575d681ff300a1,},Annotations:map[string]string{io.kubernetes.container.hash: 643d762b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d8373a804c69ba8d1619648497e02da7e9871a87beae160c2ce8150e37fc0ea,PodSandboxId:9f8cf08949e7302b373ff6a2533d1550dc21c212ed789d76063b3ce7c73853d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,State:CONTAINER_EXITED,CreatedAt:1689633637082147141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 79007c8b63df44d0b74e723ffe8e6a07,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1ca00541c56d22fe889137e6dae933b737e0deb8f00b9e9e0c8eba1b70894c7,PodSandboxId:a0025da2db90835f245b5f8a6d0bc9e679beedcd18b51800b617c700ae53d5da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,State:CONTAINER_EXITED,CreatedAt:1689633634813369176,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb0c3963dd9c9298d8758b4a0d5be12,},Annotations
:map[string]string{io.kubernetes.container.hash: 5a7f9746,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=aa485db2-0c3f-4184-b761-9197efd0971f name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 22:42:06 pause-482945 crio[2696]: time="2023-07-17 22:42:06.724407521Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=57a7e53b-82be-4a22-9a55-88eecc762509 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:42:06 pause-482945 crio[2696]: time="2023-07-17 22:42:06.724517848Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=57a7e53b-82be-4a22-9a55-88eecc762509 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:42:06 pause-482945 crio[2696]: time="2023-07-17 22:42:06.724820124Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7b1a3cec3d7f08c4bec4b61e9223274e0170a12540d069fd0aff05e908d4daf,PodSandboxId:c45166f36f48cfad093e7aa17bf840a6de535f48696fbbe2847c686bc94ff7cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689633706838766953,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-n5clq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf8c414-139d-4e80-b399-989e458a4a30,},Annotations:map[string]string{io.kubernetes.container.hash: 7cf71c81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a8f9e69f45d4a6a76c15b6645a0c5795d518593044528d26e43dd0e95b4308,PodSandboxId:31c8cd93d1d4a9b9aa6a1b3de8cc9d6e0ce996cebe35e6eadf33cdecbab4e311,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689633706788693697,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g265v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 161f1f66-5158-437d-b56d-37ff4b108182,},Annotations:map[string]string{io.kubernetes.container.hash: a57fd1ed,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2ae72714db8f4ff8c3fc755f466154f9817b1a38db0d90c1de112207dff34a,PodSandboxId:f3e38f10f9de54fe2388f3cceace6ef2353c69ec66e72584e6ec9a97aabe21e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689633699575103623,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-482945,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: bc8d774132f3e0d505df5afbd8cf90cf,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce2ff2a1ecaa45bdf6aa396fef0057b97c17bfa8ed5d7949f80fa0ab22edf0c2,PodSandboxId:d7d895f37ef9da3a476d1341c1149d932ecc418360caa92d17d0583de687b8ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689633698908787803,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67feb80efd1440a7d5575d681ff
300a1,},Annotations:map[string]string{io.kubernetes.container.hash: 643d762b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7749c0ac83e21eb203e9057eaa6f99e45faf7da9763ce362db9940be8913148a,PodSandboxId:5dd7fbee8125875816da72975c257920abab1fa9dfe30c718d0dedbdb7d6c5a4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689633659639272538,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79007c8b63df44d0b74e723ffe8e6a07,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad17ba250549dc1c6d445b231868022a4136390ccc25bacf622646bf1ea018f,PodSandboxId:f3e38f10f9de54fe2388f3cceace6ef2353c69ec66e72584e6ec9a97aabe21e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_EXITED,CreatedAt:1689633659672174382,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc8d774132f3e0d505df5afb
d8cf90cf,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c149109b9f6b9129034c4ee0f4717dde40e5abae9aa5b9cbacae788cfe33da,PodSandboxId:d8b88fe2f3540dea9a8286c2c9e590669182c6f49676f3495276c529ea02e4d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689633655643353499,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb0c3963dd9c9298d8758b4a0d5be12,},Annot
ations:map[string]string{io.kubernetes.container.hash: 5a7f9746,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3ec1be732d0fd461040ac16ca86d4c18696586aa8e9f36da865743a34d034e,PodSandboxId:31c8cd93d1d4a9b9aa6a1b3de8cc9d6e0ce996cebe35e6eadf33cdecbab4e311,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_EXITED,CreatedAt:1689633642490840990,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g265v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 161f1f66-5158-437d-b56d-37ff4b108182,},Annotations:map[string]string{io.kubernet
es.container.hash: a57fd1ed,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c8781aa5f3178cdd590e8e9479569ad7fefdee40094e5ba8b19699b9197821,PodSandboxId:c45166f36f48cfad093e7aa17bf840a6de535f48696fbbe2847c686bc94ff7cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1689633641564900851,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-n5clq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf8c414-139d-4e80-b399-989e458a4a30,},Annotations:map[string]string{io.kubernetes.container.hash: 7cf71c81,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f223e08272c4313fc7d0e1fdadd144a278bc3ec384d59b4017acf363e60896c,PodSandboxId:d7d895f37ef9da3a476d1341c1149d932ecc418360caa92d17d0583de687b8ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_EXITED,CreatedAt:1689633640878479989,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-482945,
io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67feb80efd1440a7d5575d681ff300a1,},Annotations:map[string]string{io.kubernetes.container.hash: 643d762b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d8373a804c69ba8d1619648497e02da7e9871a87beae160c2ce8150e37fc0ea,PodSandboxId:9f8cf08949e7302b373ff6a2533d1550dc21c212ed789d76063b3ce7c73853d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,State:CONTAINER_EXITED,CreatedAt:1689633637082147141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 79007c8b63df44d0b74e723ffe8e6a07,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1ca00541c56d22fe889137e6dae933b737e0deb8f00b9e9e0c8eba1b70894c7,PodSandboxId:a0025da2db90835f245b5f8a6d0bc9e679beedcd18b51800b617c700ae53d5da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,State:CONTAINER_EXITED,CreatedAt:1689633634813369176,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb0c3963dd9c9298d8758b4a0d5be12,},Annotations
:map[string]string{io.kubernetes.container.hash: 5a7f9746,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=57a7e53b-82be-4a22-9a55-88eecc762509 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:42:06 pause-482945 crio[2696]: time="2023-07-17 22:42:06.782230112Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=eda91ebd-5eaf-4b8d-8d96-4b9d8fcf692d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:42:06 pause-482945 crio[2696]: time="2023-07-17 22:42:06.782356872Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=eda91ebd-5eaf-4b8d-8d96-4b9d8fcf692d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:42:06 pause-482945 crio[2696]: time="2023-07-17 22:42:06.782697650Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7b1a3cec3d7f08c4bec4b61e9223274e0170a12540d069fd0aff05e908d4daf,PodSandboxId:c45166f36f48cfad093e7aa17bf840a6de535f48696fbbe2847c686bc94ff7cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689633706838766953,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-n5clq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf8c414-139d-4e80-b399-989e458a4a30,},Annotations:map[string]string{io.kubernetes.container.hash: 7cf71c81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a8f9e69f45d4a6a76c15b6645a0c5795d518593044528d26e43dd0e95b4308,PodSandboxId:31c8cd93d1d4a9b9aa6a1b3de8cc9d6e0ce996cebe35e6eadf33cdecbab4e311,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689633706788693697,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g265v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 161f1f66-5158-437d-b56d-37ff4b108182,},Annotations:map[string]string{io.kubernetes.container.hash: a57fd1ed,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2ae72714db8f4ff8c3fc755f466154f9817b1a38db0d90c1de112207dff34a,PodSandboxId:f3e38f10f9de54fe2388f3cceace6ef2353c69ec66e72584e6ec9a97aabe21e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689633699575103623,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-482945,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: bc8d774132f3e0d505df5afbd8cf90cf,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce2ff2a1ecaa45bdf6aa396fef0057b97c17bfa8ed5d7949f80fa0ab22edf0c2,PodSandboxId:d7d895f37ef9da3a476d1341c1149d932ecc418360caa92d17d0583de687b8ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689633698908787803,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67feb80efd1440a7d5575d681ff
300a1,},Annotations:map[string]string{io.kubernetes.container.hash: 643d762b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7749c0ac83e21eb203e9057eaa6f99e45faf7da9763ce362db9940be8913148a,PodSandboxId:5dd7fbee8125875816da72975c257920abab1fa9dfe30c718d0dedbdb7d6c5a4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689633659639272538,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79007c8b63df44d0b74e723ffe8e6a07,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad17ba250549dc1c6d445b231868022a4136390ccc25bacf622646bf1ea018f,PodSandboxId:f3e38f10f9de54fe2388f3cceace6ef2353c69ec66e72584e6ec9a97aabe21e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_EXITED,CreatedAt:1689633659672174382,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc8d774132f3e0d505df5afb
d8cf90cf,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c149109b9f6b9129034c4ee0f4717dde40e5abae9aa5b9cbacae788cfe33da,PodSandboxId:d8b88fe2f3540dea9a8286c2c9e590669182c6f49676f3495276c529ea02e4d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689633655643353499,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb0c3963dd9c9298d8758b4a0d5be12,},Annot
ations:map[string]string{io.kubernetes.container.hash: 5a7f9746,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3ec1be732d0fd461040ac16ca86d4c18696586aa8e9f36da865743a34d034e,PodSandboxId:31c8cd93d1d4a9b9aa6a1b3de8cc9d6e0ce996cebe35e6eadf33cdecbab4e311,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_EXITED,CreatedAt:1689633642490840990,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g265v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 161f1f66-5158-437d-b56d-37ff4b108182,},Annotations:map[string]string{io.kubernet
es.container.hash: a57fd1ed,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c8781aa5f3178cdd590e8e9479569ad7fefdee40094e5ba8b19699b9197821,PodSandboxId:c45166f36f48cfad093e7aa17bf840a6de535f48696fbbe2847c686bc94ff7cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1689633641564900851,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-n5clq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf8c414-139d-4e80-b399-989e458a4a30,},Annotations:map[string]string{io.kubernetes.container.hash: 7cf71c81,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f223e08272c4313fc7d0e1fdadd144a278bc3ec384d59b4017acf363e60896c,PodSandboxId:d7d895f37ef9da3a476d1341c1149d932ecc418360caa92d17d0583de687b8ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_EXITED,CreatedAt:1689633640878479989,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-482945,
io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67feb80efd1440a7d5575d681ff300a1,},Annotations:map[string]string{io.kubernetes.container.hash: 643d762b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d8373a804c69ba8d1619648497e02da7e9871a87beae160c2ce8150e37fc0ea,PodSandboxId:9f8cf08949e7302b373ff6a2533d1550dc21c212ed789d76063b3ce7c73853d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,State:CONTAINER_EXITED,CreatedAt:1689633637082147141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 79007c8b63df44d0b74e723ffe8e6a07,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1ca00541c56d22fe889137e6dae933b737e0deb8f00b9e9e0c8eba1b70894c7,PodSandboxId:a0025da2db90835f245b5f8a6d0bc9e679beedcd18b51800b617c700ae53d5da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,State:CONTAINER_EXITED,CreatedAt:1689633634813369176,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb0c3963dd9c9298d8758b4a0d5be12,},Annotations
:map[string]string{io.kubernetes.container.hash: 5a7f9746,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=eda91ebd-5eaf-4b8d-8d96-4b9d8fcf692d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:42:06 pause-482945 crio[2696]: time="2023-07-17 22:42:06.841884986Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6ae76bd7-43f8-4662-ae25-dc2ac81da035 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:42:06 pause-482945 crio[2696]: time="2023-07-17 22:42:06.842049088Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6ae76bd7-43f8-4662-ae25-dc2ac81da035 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:42:06 pause-482945 crio[2696]: time="2023-07-17 22:42:06.842460419Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7b1a3cec3d7f08c4bec4b61e9223274e0170a12540d069fd0aff05e908d4daf,PodSandboxId:c45166f36f48cfad093e7aa17bf840a6de535f48696fbbe2847c686bc94ff7cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689633706838766953,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-n5clq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf8c414-139d-4e80-b399-989e458a4a30,},Annotations:map[string]string{io.kubernetes.container.hash: 7cf71c81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a8f9e69f45d4a6a76c15b6645a0c5795d518593044528d26e43dd0e95b4308,PodSandboxId:31c8cd93d1d4a9b9aa6a1b3de8cc9d6e0ce996cebe35e6eadf33cdecbab4e311,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689633706788693697,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g265v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 161f1f66-5158-437d-b56d-37ff4b108182,},Annotations:map[string]string{io.kubernetes.container.hash: a57fd1ed,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2ae72714db8f4ff8c3fc755f466154f9817b1a38db0d90c1de112207dff34a,PodSandboxId:f3e38f10f9de54fe2388f3cceace6ef2353c69ec66e72584e6ec9a97aabe21e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689633699575103623,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-482945,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: bc8d774132f3e0d505df5afbd8cf90cf,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce2ff2a1ecaa45bdf6aa396fef0057b97c17bfa8ed5d7949f80fa0ab22edf0c2,PodSandboxId:d7d895f37ef9da3a476d1341c1149d932ecc418360caa92d17d0583de687b8ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689633698908787803,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67feb80efd1440a7d5575d681ff
300a1,},Annotations:map[string]string{io.kubernetes.container.hash: 643d762b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7749c0ac83e21eb203e9057eaa6f99e45faf7da9763ce362db9940be8913148a,PodSandboxId:5dd7fbee8125875816da72975c257920abab1fa9dfe30c718d0dedbdb7d6c5a4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689633659639272538,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79007c8b63df44d0b74e723ffe8e6a07,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad17ba250549dc1c6d445b231868022a4136390ccc25bacf622646bf1ea018f,PodSandboxId:f3e38f10f9de54fe2388f3cceace6ef2353c69ec66e72584e6ec9a97aabe21e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_EXITED,CreatedAt:1689633659672174382,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc8d774132f3e0d505df5afb
d8cf90cf,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c149109b9f6b9129034c4ee0f4717dde40e5abae9aa5b9cbacae788cfe33da,PodSandboxId:d8b88fe2f3540dea9a8286c2c9e590669182c6f49676f3495276c529ea02e4d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689633655643353499,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb0c3963dd9c9298d8758b4a0d5be12,},Annot
ations:map[string]string{io.kubernetes.container.hash: 5a7f9746,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3ec1be732d0fd461040ac16ca86d4c18696586aa8e9f36da865743a34d034e,PodSandboxId:31c8cd93d1d4a9b9aa6a1b3de8cc9d6e0ce996cebe35e6eadf33cdecbab4e311,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_EXITED,CreatedAt:1689633642490840990,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g265v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 161f1f66-5158-437d-b56d-37ff4b108182,},Annotations:map[string]string{io.kubernet
es.container.hash: a57fd1ed,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c8781aa5f3178cdd590e8e9479569ad7fefdee40094e5ba8b19699b9197821,PodSandboxId:c45166f36f48cfad093e7aa17bf840a6de535f48696fbbe2847c686bc94ff7cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1689633641564900851,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-n5clq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf8c414-139d-4e80-b399-989e458a4a30,},Annotations:map[string]string{io.kubernetes.container.hash: 7cf71c81,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f223e08272c4313fc7d0e1fdadd144a278bc3ec384d59b4017acf363e60896c,PodSandboxId:d7d895f37ef9da3a476d1341c1149d932ecc418360caa92d17d0583de687b8ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_EXITED,CreatedAt:1689633640878479989,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-482945,
io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67feb80efd1440a7d5575d681ff300a1,},Annotations:map[string]string{io.kubernetes.container.hash: 643d762b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d8373a804c69ba8d1619648497e02da7e9871a87beae160c2ce8150e37fc0ea,PodSandboxId:9f8cf08949e7302b373ff6a2533d1550dc21c212ed789d76063b3ce7c73853d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,State:CONTAINER_EXITED,CreatedAt:1689633637082147141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 79007c8b63df44d0b74e723ffe8e6a07,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1ca00541c56d22fe889137e6dae933b737e0deb8f00b9e9e0c8eba1b70894c7,PodSandboxId:a0025da2db90835f245b5f8a6d0bc9e679beedcd18b51800b617c700ae53d5da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,State:CONTAINER_EXITED,CreatedAt:1689633634813369176,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb0c3963dd9c9298d8758b4a0d5be12,},Annotations
:map[string]string{io.kubernetes.container.hash: 5a7f9746,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6ae76bd7-43f8-4662-ae25-dc2ac81da035 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:42:06 pause-482945 crio[2696]: time="2023-07-17 22:42:06.903742596Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f0cafd03-f611-4e8e-94a0-49cf3ad582a8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:42:06 pause-482945 crio[2696]: time="2023-07-17 22:42:06.903865111Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f0cafd03-f611-4e8e-94a0-49cf3ad582a8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:42:06 pause-482945 crio[2696]: time="2023-07-17 22:42:06.904312604Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7b1a3cec3d7f08c4bec4b61e9223274e0170a12540d069fd0aff05e908d4daf,PodSandboxId:c45166f36f48cfad093e7aa17bf840a6de535f48696fbbe2847c686bc94ff7cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689633706838766953,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-n5clq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf8c414-139d-4e80-b399-989e458a4a30,},Annotations:map[string]string{io.kubernetes.container.hash: 7cf71c81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a8f9e69f45d4a6a76c15b6645a0c5795d518593044528d26e43dd0e95b4308,PodSandboxId:31c8cd93d1d4a9b9aa6a1b3de8cc9d6e0ce996cebe35e6eadf33cdecbab4e311,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689633706788693697,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g265v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 161f1f66-5158-437d-b56d-37ff4b108182,},Annotations:map[string]string{io.kubernetes.container.hash: a57fd1ed,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2ae72714db8f4ff8c3fc755f466154f9817b1a38db0d90c1de112207dff34a,PodSandboxId:f3e38f10f9de54fe2388f3cceace6ef2353c69ec66e72584e6ec9a97aabe21e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689633699575103623,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-482945,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: bc8d774132f3e0d505df5afbd8cf90cf,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce2ff2a1ecaa45bdf6aa396fef0057b97c17bfa8ed5d7949f80fa0ab22edf0c2,PodSandboxId:d7d895f37ef9da3a476d1341c1149d932ecc418360caa92d17d0583de687b8ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689633698908787803,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67feb80efd1440a7d5575d681ff
300a1,},Annotations:map[string]string{io.kubernetes.container.hash: 643d762b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7749c0ac83e21eb203e9057eaa6f99e45faf7da9763ce362db9940be8913148a,PodSandboxId:5dd7fbee8125875816da72975c257920abab1fa9dfe30c718d0dedbdb7d6c5a4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689633659639272538,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79007c8b63df44d0b74e723ffe8e6a07,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad17ba250549dc1c6d445b231868022a4136390ccc25bacf622646bf1ea018f,PodSandboxId:f3e38f10f9de54fe2388f3cceace6ef2353c69ec66e72584e6ec9a97aabe21e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_EXITED,CreatedAt:1689633659672174382,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc8d774132f3e0d505df5afb
d8cf90cf,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c149109b9f6b9129034c4ee0f4717dde40e5abae9aa5b9cbacae788cfe33da,PodSandboxId:d8b88fe2f3540dea9a8286c2c9e590669182c6f49676f3495276c529ea02e4d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689633655643353499,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb0c3963dd9c9298d8758b4a0d5be12,},Annot
ations:map[string]string{io.kubernetes.container.hash: 5a7f9746,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3ec1be732d0fd461040ac16ca86d4c18696586aa8e9f36da865743a34d034e,PodSandboxId:31c8cd93d1d4a9b9aa6a1b3de8cc9d6e0ce996cebe35e6eadf33cdecbab4e311,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_EXITED,CreatedAt:1689633642490840990,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g265v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 161f1f66-5158-437d-b56d-37ff4b108182,},Annotations:map[string]string{io.kubernet
es.container.hash: a57fd1ed,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c8781aa5f3178cdd590e8e9479569ad7fefdee40094e5ba8b19699b9197821,PodSandboxId:c45166f36f48cfad093e7aa17bf840a6de535f48696fbbe2847c686bc94ff7cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1689633641564900851,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-n5clq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf8c414-139d-4e80-b399-989e458a4a30,},Annotations:map[string]string{io.kubernetes.container.hash: 7cf71c81,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f223e08272c4313fc7d0e1fdadd144a278bc3ec384d59b4017acf363e60896c,PodSandboxId:d7d895f37ef9da3a476d1341c1149d932ecc418360caa92d17d0583de687b8ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_EXITED,CreatedAt:1689633640878479989,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-482945,
io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67feb80efd1440a7d5575d681ff300a1,},Annotations:map[string]string{io.kubernetes.container.hash: 643d762b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d8373a804c69ba8d1619648497e02da7e9871a87beae160c2ce8150e37fc0ea,PodSandboxId:9f8cf08949e7302b373ff6a2533d1550dc21c212ed789d76063b3ce7c73853d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,State:CONTAINER_EXITED,CreatedAt:1689633637082147141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 79007c8b63df44d0b74e723ffe8e6a07,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1ca00541c56d22fe889137e6dae933b737e0deb8f00b9e9e0c8eba1b70894c7,PodSandboxId:a0025da2db90835f245b5f8a6d0bc9e679beedcd18b51800b617c700ae53d5da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,State:CONTAINER_EXITED,CreatedAt:1689633634813369176,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb0c3963dd9c9298d8758b4a0d5be12,},Annotations
:map[string]string{io.kubernetes.container.hash: 5a7f9746,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f0cafd03-f611-4e8e-94a0-49cf3ad582a8 name=/runtime.v1alpha2.RuntimeService/ListContainers
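	The repeated /runtime.v1alpha2.RuntimeService/ListContainers request/response pairs above are ordinary CRI debug traces: a client lists containers with no filter, and CRI-O answers with the full container list, including each container's attempt number, restart count, and RUNNING/EXITED state. As a rough sketch only (assuming crictl is present on the node, the default CRI-O socket, and the profile name used in this run), an equivalent listing could be pulled by hand with:

	out/minikube-linux-amd64 -p pause-482945 ssh "sudo crictl ps -a"            # table view of all containers, running and exited
	out/minikube-linux-amd64 -p pause-482945 ssh "sudo crictl ps -a -o json"    # same data in structured form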
	Jul 17 22:42:06 pause-482945 crio[2696]: time="2023-07-17 22:42:06.961736701Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0938b49c-8c6a-4794-95d0-c0b5ebed1762 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:42:06 pause-482945 crio[2696]: time="2023-07-17 22:42:06.961863998Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0938b49c-8c6a-4794-95d0-c0b5ebed1762 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 22:42:06 pause-482945 crio[2696]: time="2023-07-17 22:42:06.962327526Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a7b1a3cec3d7f08c4bec4b61e9223274e0170a12540d069fd0aff05e908d4daf,PodSandboxId:c45166f36f48cfad093e7aa17bf840a6de535f48696fbbe2847c686bc94ff7cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689633706838766953,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-n5clq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf8c414-139d-4e80-b399-989e458a4a30,},Annotations:map[string]string{io.kubernetes.container.hash: 7cf71c81,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a8f9e69f45d4a6a76c15b6645a0c5795d518593044528d26e43dd0e95b4308,PodSandboxId:31c8cd93d1d4a9b9aa6a1b3de8cc9d6e0ce996cebe35e6eadf33cdecbab4e311,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689633706788693697,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g265v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 161f1f66-5158-437d-b56d-37ff4b108182,},Annotations:map[string]string{io.kubernetes.container.hash: a57fd1ed,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d2ae72714db8f4ff8c3fc755f466154f9817b1a38db0d90c1de112207dff34a,PodSandboxId:f3e38f10f9de54fe2388f3cceace6ef2353c69ec66e72584e6ec9a97aabe21e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689633699575103623,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-482945,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: bc8d774132f3e0d505df5afbd8cf90cf,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce2ff2a1ecaa45bdf6aa396fef0057b97c17bfa8ed5d7949f80fa0ab22edf0c2,PodSandboxId:d7d895f37ef9da3a476d1341c1149d932ecc418360caa92d17d0583de687b8ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689633698908787803,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67feb80efd1440a7d5575d681ff
300a1,},Annotations:map[string]string{io.kubernetes.container.hash: 643d762b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7749c0ac83e21eb203e9057eaa6f99e45faf7da9763ce362db9940be8913148a,PodSandboxId:5dd7fbee8125875816da72975c257920abab1fa9dfe30c718d0dedbdb7d6c5a4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689633659639272538,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79007c8b63df44d0b74e723ffe8e6a07,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad17ba250549dc1c6d445b231868022a4136390ccc25bacf622646bf1ea018f,PodSandboxId:f3e38f10f9de54fe2388f3cceace6ef2353c69ec66e72584e6ec9a97aabe21e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_EXITED,CreatedAt:1689633659672174382,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc8d774132f3e0d505df5afb
d8cf90cf,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56c149109b9f6b9129034c4ee0f4717dde40e5abae9aa5b9cbacae788cfe33da,PodSandboxId:d8b88fe2f3540dea9a8286c2c9e590669182c6f49676f3495276c529ea02e4d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689633655643353499,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb0c3963dd9c9298d8758b4a0d5be12,},Annot
ations:map[string]string{io.kubernetes.container.hash: 5a7f9746,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3ec1be732d0fd461040ac16ca86d4c18696586aa8e9f36da865743a34d034e,PodSandboxId:31c8cd93d1d4a9b9aa6a1b3de8cc9d6e0ce996cebe35e6eadf33cdecbab4e311,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_EXITED,CreatedAt:1689633642490840990,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g265v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 161f1f66-5158-437d-b56d-37ff4b108182,},Annotations:map[string]string{io.kubernet
es.container.hash: a57fd1ed,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c8781aa5f3178cdd590e8e9479569ad7fefdee40094e5ba8b19699b9197821,PodSandboxId:c45166f36f48cfad093e7aa17bf840a6de535f48696fbbe2847c686bc94ff7cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1689633641564900851,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-n5clq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf8c414-139d-4e80-b399-989e458a4a30,},Annotations:map[string]string{io.kubernetes.container.hash: 7cf71c81,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f223e08272c4313fc7d0e1fdadd144a278bc3ec384d59b4017acf363e60896c,PodSandboxId:d7d895f37ef9da3a476d1341c1149d932ecc418360caa92d17d0583de687b8ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_EXITED,CreatedAt:1689633640878479989,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-482945,
io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67feb80efd1440a7d5575d681ff300a1,},Annotations:map[string]string{io.kubernetes.container.hash: 643d762b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d8373a804c69ba8d1619648497e02da7e9871a87beae160c2ce8150e37fc0ea,PodSandboxId:9f8cf08949e7302b373ff6a2533d1550dc21c212ed789d76063b3ce7c73853d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,State:CONTAINER_EXITED,CreatedAt:1689633637082147141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 79007c8b63df44d0b74e723ffe8e6a07,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1ca00541c56d22fe889137e6dae933b737e0deb8f00b9e9e0c8eba1b70894c7,PodSandboxId:a0025da2db90835f245b5f8a6d0bc9e679beedcd18b51800b617c700ae53d5da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,State:CONTAINER_EXITED,CreatedAt:1689633634813369176,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-482945,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb0c3963dd9c9298d8758b4a0d5be12,},Annotations
:map[string]string{io.kubernetes.container.hash: 5a7f9746,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0938b49c-8c6a-4794-95d0-c0b5ebed1762 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	a7b1a3cec3d7f       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   20 seconds ago       Running             coredns                   2                   c45166f36f48c
	f6a8f9e69f45d       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c   20 seconds ago       Running             kube-proxy                2                   31c8cd93d1d4a
	1d2ae72714db8       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f   27 seconds ago       Running             kube-controller-manager   3                   f3e38f10f9de5
	ce2ff2a1ecaa4       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681   28 seconds ago       Running             etcd                      2                   d7d895f37ef9d
	3ad17ba250549       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f   About a minute ago   Exited              kube-controller-manager   2                   f3e38f10f9de5
	7749c0ac83e21       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a   About a minute ago   Running             kube-scheduler            2                   5dd7fbee81258
	56c149109b9f6       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a   About a minute ago   Running             kube-apiserver            2                   d8b88fe2f3540
	7e3ec1be732d0       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c   About a minute ago   Exited              kube-proxy                1                   31c8cd93d1d4a
	d5c8781aa5f31       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   About a minute ago   Exited              coredns                   1                   c45166f36f48c
	9f223e08272c4       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681   About a minute ago   Exited              etcd                      1                   d7d895f37ef9d
	8d8373a804c69       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a   About a minute ago   Exited              kube-scheduler            1                   9f8cf08949e73
	c1ca00541c56d       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a   About a minute ago   Exited              kube-apiserver            1                   a0025da2db908
	
	* 
	* ==> coredns [a7b1a3cec3d7f08c4bec4b61e9223274e0170a12540d069fd0aff05e908d4daf] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40802 - 18718 "HINFO IN 333239259983893566.5303941569989730839. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013104622s
	
	* 
	* ==> coredns [d5c8781aa5f3178cdd590e8e9479569ad7fefdee40094e5ba8b19699b9197821] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41056 - 9930 "HINFO IN 3443356411995585141.4116104925062061114. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00733953s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-482945
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-482945
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8
	                    minikube.k8s.io/name=pause-482945
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T22_39_40_0700
	                    minikube.k8s.io/version=v1.31.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 22:39:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-482945
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 22:42:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 22:41:47 +0000   Mon, 17 Jul 2023 22:39:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 22:41:47 +0000   Mon, 17 Jul 2023 22:39:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 22:41:47 +0000   Mon, 17 Jul 2023 22:39:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 22:41:47 +0000   Mon, 17 Jul 2023 22:41:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.117
	  Hostname:    pause-482945
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 8f33221eacfb410c82330fe610b7ef04
	  System UUID:                8f33221e-acfb-410c-8233-0fe610b7ef04
	  Boot ID:                    539e8f43-9b69-43c9-b849-1055e701ed92
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-n5clq                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m15s
	  kube-system                 etcd-pause-482945                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m27s
	  kube-system                 kube-apiserver-pause-482945             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-controller-manager-pause-482945    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-proxy-g265v                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kube-scheduler-pause-482945             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m11s              kube-proxy       
	  Normal  Starting                 20s                kube-proxy       
	  Normal  NodeHasNoDiskPressure    2m27s              kubelet          Node pause-482945 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m27s              kubelet          Node pause-482945 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m27s              kubelet          Node pause-482945 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m27s              kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m27s              kubelet          Starting kubelet.
	  Normal  NodeReady                2m26s              kubelet          Node pause-482945 status is now: NodeReady
	  Normal  RegisteredNode           2m16s              node-controller  Node pause-482945 event: Registered Node pause-482945 in Controller
	  Normal  NodeAllocatableEnforced  63s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    29s (x2 over 63s)  kubelet          Node pause-482945 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s (x2 over 63s)  kubelet          Node pause-482945 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  29s (x2 over 63s)  kubelet          Node pause-482945 status is now: NodeHasSufficientMemory
	  Normal  NodeNotReady             26s                kubelet          Node pause-482945 status is now: NodeNotReady
	  Normal  NodeReady                20s                kubelet          Node pause-482945 status is now: NodeReady
	  Normal  RegisteredNode           10s                node-controller  Node pause-482945 event: Registered Node pause-482945 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.071855] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.629841] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.513788] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.175109] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000004] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.324413] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.532129] systemd-fstab-generator[645]: Ignoring "noauto" for root device
	[  +0.125339] systemd-fstab-generator[656]: Ignoring "noauto" for root device
	[  +0.163833] systemd-fstab-generator[669]: Ignoring "noauto" for root device
	[  +0.126203] systemd-fstab-generator[680]: Ignoring "noauto" for root device
	[  +0.268383] systemd-fstab-generator[704]: Ignoring "noauto" for root device
	[  +9.085279] systemd-fstab-generator[928]: Ignoring "noauto" for root device
	[  +9.863013] systemd-fstab-generator[1259]: Ignoring "noauto" for root device
	[Jul17 22:40] kauditd_printk_skb: 28 callbacks suppressed
	[  +1.479638] systemd-fstab-generator[2404]: Ignoring "noauto" for root device
	[  +0.416051] systemd-fstab-generator[2468]: Ignoring "noauto" for root device
	[  +0.380208] systemd-fstab-generator[2499]: Ignoring "noauto" for root device
	[  +0.348395] systemd-fstab-generator[2510]: Ignoring "noauto" for root device
	[  +0.651388] systemd-fstab-generator[2544]: Ignoring "noauto" for root device
	[  +1.809851] kauditd_printk_skb: 8 callbacks suppressed
	[Jul17 22:41] systemd-fstab-generator[3744]: Ignoring "noauto" for root device
	[ +43.692051] kauditd_printk_skb: 8 callbacks suppressed
	[Jul17 22:42] hrtimer: interrupt took 2823355 ns
	
	* 
	* ==> etcd [9f223e08272c4313fc7d0e1fdadd144a278bc3ec384d59b4017acf363e60896c] <==
	* {"level":"info","ts":"2023-07-17T22:40:42.191Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-17T22:40:42.191Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"58dbdb76d15f9806","initial-advertise-peer-urls":["https://192.168.61.117:2380"],"listen-peer-urls":["https://192.168.61.117:2380"],"advertise-client-urls":["https://192.168.61.117:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.117:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-17T22:40:42.191Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-17T22:40:42.191Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.61.117:2380"}
	{"level":"info","ts":"2023-07-17T22:40:42.191Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.61.117:2380"}
	{"level":"info","ts":"2023-07-17T22:40:43.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58dbdb76d15f9806 is starting a new election at term 2"}
	{"level":"info","ts":"2023-07-17T22:40:43.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58dbdb76d15f9806 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-07-17T22:40:43.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58dbdb76d15f9806 received MsgPreVoteResp from 58dbdb76d15f9806 at term 2"}
	{"level":"info","ts":"2023-07-17T22:40:43.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58dbdb76d15f9806 became candidate at term 3"}
	{"level":"info","ts":"2023-07-17T22:40:43.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58dbdb76d15f9806 received MsgVoteResp from 58dbdb76d15f9806 at term 3"}
	{"level":"info","ts":"2023-07-17T22:40:43.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58dbdb76d15f9806 became leader at term 3"}
	{"level":"info","ts":"2023-07-17T22:40:43.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 58dbdb76d15f9806 elected leader 58dbdb76d15f9806 at term 3"}
	{"level":"info","ts":"2023-07-17T22:40:43.630Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T22:40:43.631Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"58dbdb76d15f9806","local-member-attributes":"{Name:pause-482945 ClientURLs:[https://192.168.61.117:2379]}","request-path":"/0/members/58dbdb76d15f9806/attributes","cluster-id":"b6f25112358c5425","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-17T22:40:43.631Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T22:40:43.631Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.61.117:2379"}
	{"level":"info","ts":"2023-07-17T22:40:43.632Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-17T22:40:43.632Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-17T22:40:43.632Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-17T22:41:01.280Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-07-17T22:41:01.280Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"pause-482945","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.117:2380"],"advertise-client-urls":["https://192.168.61.117:2379"]}
	{"level":"info","ts":"2023-07-17T22:41:01.368Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"58dbdb76d15f9806","current-leader-member-id":"58dbdb76d15f9806"}
	{"level":"info","ts":"2023-07-17T22:41:01.375Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.61.117:2380"}
	{"level":"info","ts":"2023-07-17T22:41:01.378Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.61.117:2380"}
	{"level":"info","ts":"2023-07-17T22:41:01.378Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"pause-482945","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.117:2380"],"advertise-client-urls":["https://192.168.61.117:2379"]}
	
	* 
	* ==> etcd [ce2ff2a1ecaa45bdf6aa396fef0057b97c17bfa8ed5d7949f80fa0ab22edf0c2] <==
	* {"level":"info","ts":"2023-07-17T22:41:39.497Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-17T22:41:39.497Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-17T22:41:39.498Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58dbdb76d15f9806 switched to configuration voters=(6402952598602618886)"}
	{"level":"info","ts":"2023-07-17T22:41:39.498Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b6f25112358c5425","local-member-id":"58dbdb76d15f9806","added-peer-id":"58dbdb76d15f9806","added-peer-peer-urls":["https://192.168.61.117:2380"]}
	{"level":"info","ts":"2023-07-17T22:41:39.498Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b6f25112358c5425","local-member-id":"58dbdb76d15f9806","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T22:41:39.498Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T22:41:39.499Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-17T22:41:39.500Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"58dbdb76d15f9806","initial-advertise-peer-urls":["https://192.168.61.117:2380"],"listen-peer-urls":["https://192.168.61.117:2380"],"advertise-client-urls":["https://192.168.61.117:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.117:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-17T22:41:39.500Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-17T22:41:39.500Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.61.117:2380"}
	{"level":"info","ts":"2023-07-17T22:41:39.500Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.61.117:2380"}
	{"level":"info","ts":"2023-07-17T22:41:40.585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58dbdb76d15f9806 is starting a new election at term 3"}
	{"level":"info","ts":"2023-07-17T22:41:40.586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58dbdb76d15f9806 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-07-17T22:41:40.586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58dbdb76d15f9806 received MsgPreVoteResp from 58dbdb76d15f9806 at term 3"}
	{"level":"info","ts":"2023-07-17T22:41:40.586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58dbdb76d15f9806 became candidate at term 4"}
	{"level":"info","ts":"2023-07-17T22:41:40.586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58dbdb76d15f9806 received MsgVoteResp from 58dbdb76d15f9806 at term 4"}
	{"level":"info","ts":"2023-07-17T22:41:40.586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58dbdb76d15f9806 became leader at term 4"}
	{"level":"info","ts":"2023-07-17T22:41:40.586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 58dbdb76d15f9806 elected leader 58dbdb76d15f9806 at term 4"}
	{"level":"info","ts":"2023-07-17T22:41:40.587Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"58dbdb76d15f9806","local-member-attributes":"{Name:pause-482945 ClientURLs:[https://192.168.61.117:2379]}","request-path":"/0/members/58dbdb76d15f9806/attributes","cluster-id":"b6f25112358c5425","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-17T22:41:40.587Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T22:41:40.589Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.61.117:2379"}
	{"level":"info","ts":"2023-07-17T22:41:40.592Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T22:41:40.595Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-17T22:41:40.596Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-17T22:41:40.596Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  22:42:07 up 3 min,  0 users,  load average: 1.13, 0.67, 0.27
	Linux pause-482945 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [56c149109b9f6b9129034c4ee0f4717dde40e5abae9aa5b9cbacae788cfe33da] <==
	* Trace[111396636]: [6.811653063s] [6.811653063s] END
	I0717 22:41:46.606213       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0717 22:41:47.167337       1 trace.go:219] Trace[1469252900]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:300e2b27-7aed-41d0-b605-da399e1b940d,client:192.168.61.117,protocol:HTTP/2.0,resource:csinodes,scope:resource,url:/apis/storage.k8s.io/v1/csinodes/pause-482945,user-agent:kubelet/v1.27.3 (linux/amd64) kubernetes/25b4e43,verb:GET (17-Jul-2023 22:41:04.101) (total time: 43065ms):
	Trace[1469252900]: ---"About to write a response" 43065ms (22:41:47.166)
	Trace[1469252900]: [43.065391777s] [43.065391777s] END
	I0717 22:41:47.523791       1 trace.go:219] Trace[1433160230]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:fa29b799-86fc-4b87-b821-3c8b1f767ac3,client:192.168.61.117,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/default/events,user-agent:kubelet/v1.27.3 (linux/amd64) kubernetes/25b4e43,verb:POST (17-Jul-2023 22:41:38.095) (total time: 9428ms):
	Trace[1433160230]: ["Create etcd3" audit-id:fa29b799-86fc-4b87-b821-3c8b1f767ac3,key:/events/default/pause-482945.1772c8dd94399619,type:*core.Event,resource:events 9427ms (22:41:38.096)
	Trace[1433160230]:  ---"TransformToStorage succeeded" 9421ms (22:41:47.518)]
	Trace[1433160230]: [9.428564837s] [9.428564837s] END
	I0717 22:41:47.527193       1 trace.go:219] Trace[752774258]: "GuaranteedUpdate etcd3" audit-id:,key:/ranges/serviceips,type:*core.RangeAllocation,resource:serviceipallocations (17-Jul-2023 22:41:46.629) (total time: 897ms):
	Trace[752774258]: ---"initial value restored" 897ms (22:41:47.527)
	Trace[752774258]: [897.73929ms] [897.73929ms] END
	I0717 22:41:47.527482       1 trace.go:219] Trace[1559249855]: "Create" accept:application/json, */*,audit-id:9c486d77-cb1d-4033-b3a7-a0a0c0d39c79,client:192.168.61.117,protocol:HTTP/2.0,resource:services,scope:resource,url:/api/v1/namespaces/kube-system/services,user-agent:kubeadm/v1.27.3 (linux/amd64) kubernetes/25b4e43,verb:POST (17-Jul-2023 22:41:46.628) (total time: 899ms):
	Trace[1559249855]: ---"Write to database call failed" len:610,err:Service "kube-dns" is invalid: spec.clusterIPs: Invalid value: []string{"10.96.0.10"}: failed to allocate IP 10.96.0.10: provided IP is already allocated 898ms (22:41:47.527)
	Trace[1559249855]: [899.084314ms] [899.084314ms] END
	I0717 22:41:47.561546       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0717 22:41:47.646932       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 22:41:47.673610       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 22:41:48.019253       1 trace.go:219] Trace[81677459]: "Create" accept:application/vnd.kubernetes.protobuf, */*,audit-id:fe8892c6-5195-431c-a4b0-84dc55f09e87,client:192.168.61.117,protocol:HTTP/2.0,resource:events,scope:resource,url:/apis/events.k8s.io/v1/namespaces/default/events,user-agent:kube-proxy/v1.27.3 (linux/amd64) kubernetes/25b4e43,verb:POST (17-Jul-2023 22:41:47.279) (total time: 739ms):
	Trace[81677459]: ["Create etcd3" audit-id:fe8892c6-5195-431c-a4b0-84dc55f09e87,key:/events/default/pause-482945.1772c8e799b2045f,type:*core.Event,resource:events 736ms (22:41:47.283)
	Trace[81677459]:  ---"TransformToStorage succeeded" 733ms (22:41:48.016)]
	Trace[81677459]: [739.369292ms] [739.369292ms] END
	I0717 22:41:57.557498       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 22:41:57.568541       1 controller.go:624] quota admission added evaluator for: endpoints
	I0717 22:41:59.317486       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-apiserver [c1ca00541c56d22fe889137e6dae933b737e0deb8f00b9e9e0c8eba1b70894c7] <==
	* 
	* 
	* ==> kube-controller-manager [1d2ae72714db8f4ff8c3fc755f466154f9817b1a38db0d90c1de112207dff34a] <==
	* I0717 22:41:57.540655       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0717 22:41:57.544234       1 shared_informer.go:318] Caches are synced for daemon sets
	I0717 22:41:57.545617       1 shared_informer.go:318] Caches are synced for endpoint
	I0717 22:41:57.547727       1 shared_informer.go:318] Caches are synced for job
	I0717 22:41:57.552140       1 shared_informer.go:318] Caches are synced for disruption
	I0717 22:41:57.554404       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0717 22:41:57.558544       1 shared_informer.go:318] Caches are synced for TTL
	I0717 22:41:57.568821       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0717 22:41:57.595614       1 shared_informer.go:318] Caches are synced for taint
	I0717 22:41:57.596132       1 node_lifecycle_controller.go:1223] "Initializing eviction metric for zone" zone=""
	I0717 22:41:57.596399       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-482945"
	I0717 22:41:57.596450       1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
	I0717 22:41:57.596470       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0717 22:41:57.596485       1 taint_manager.go:211] "Sending events to api server"
	I0717 22:41:57.597186       1 event.go:307] "Event occurred" object="pause-482945" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-482945 event: Registered Node pause-482945 in Controller"
	I0717 22:41:57.602543       1 shared_informer.go:318] Caches are synced for namespace
	I0717 22:41:57.638815       1 shared_informer.go:318] Caches are synced for attach detach
	I0717 22:41:57.693096       1 shared_informer.go:318] Caches are synced for resource quota
	I0717 22:41:57.709866       1 shared_informer.go:318] Caches are synced for persistent volume
	I0717 22:41:57.764886       1 shared_informer.go:318] Caches are synced for resource quota
	I0717 22:41:58.087374       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 22:41:58.087462       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0717 22:41:58.101929       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 22:41:59.326223       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
	I0717 22:41:59.346063       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-dk4wn"
	
	* 
	* ==> kube-controller-manager [3ad17ba250549dc1c6d445b231868022a4136390ccc25bacf622646bf1ea018f] <==
	* I0717 22:41:01.623077       1 serving.go:348] Generated self-signed cert in-memory
	I0717 22:41:02.205854       1 controllermanager.go:187] "Starting" version="v1.27.3"
	I0717 22:41:02.206084       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 22:41:02.209148       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0717 22:41:02.210618       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0717 22:41:02.210939       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 22:41:02.211198       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0717 22:41:14.241652       1 controllermanager.go:233] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[-]etcd failed: reason withheld\\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/rbac/bootstrap-roles ok\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]po
ststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-status-available-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	* 
	* ==> kube-proxy [7e3ec1be732d0fd461040ac16ca86d4c18696586aa8e9f36da865743a34d034e] <==
	* E0717 22:40:42.663212       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-482945": dial tcp 192.168.61.117:8443: connect: connection refused
	E0717 22:40:43.801591       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-482945": dial tcp 192.168.61.117:8443: connect: connection refused
	E0717 22:40:45.982623       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-482945": dial tcp 192.168.61.117:8443: connect: connection refused
	E0717 22:40:50.240713       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-482945": dial tcp 192.168.61.117:8443: connect: connection refused
	
	* 
	* ==> kube-proxy [f6a8f9e69f45d4a6a76c15b6645a0c5795d518593044528d26e43dd0e95b4308] <==
	* I0717 22:41:47.183291       1 node.go:141] Successfully retrieved node IP: 192.168.61.117
	I0717 22:41:47.183484       1 server_others.go:110] "Detected node IP" address="192.168.61.117"
	I0717 22:41:47.183576       1 server_others.go:554] "Using iptables proxy"
	I0717 22:41:47.256704       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0717 22:41:47.256758       1 server_others.go:192] "Using iptables Proxier"
	I0717 22:41:47.256838       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 22:41:47.257843       1 server.go:658] "Version info" version="v1.27.3"
	I0717 22:41:47.257896       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 22:41:47.259446       1 config.go:188] "Starting service config controller"
	I0717 22:41:47.259518       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 22:41:47.259552       1 config.go:97] "Starting endpoint slice config controller"
	I0717 22:41:47.259556       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 22:41:47.262944       1 config.go:315] "Starting node config controller"
	I0717 22:41:47.263088       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 22:41:47.360414       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0717 22:41:47.360639       1 shared_informer.go:318] Caches are synced for service config
	I0717 22:41:47.363151       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [7749c0ac83e21eb203e9057eaa6f99e45faf7da9763ce362db9940be8913148a] <==
	* I0717 22:41:01.693694       1 serving.go:348] Generated self-signed cert in-memory
	I0717 22:41:43.499395       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.3"
	I0717 22:41:43.499461       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 22:41:43.504820       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0717 22:41:43.504967       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0717 22:41:43.505118       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0717 22:41:43.505226       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 22:41:43.508382       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 22:41:43.508500       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 22:41:43.508523       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0717 22:41:43.508614       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0717 22:41:43.605700       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0717 22:41:43.609420       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0717 22:41:43.609694       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [8d8373a804c69ba8d1619648497e02da7e9871a87beae160c2ce8150e37fc0ea] <==
	* 
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-07-17 22:39:09 UTC, ends at Mon 2023-07-17 22:42:07 UTC. --
	Jul 17 22:42:04 pause-482945 kubelet[3750]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 22:42:04 pause-482945 kubelet[3750]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 22:42:04 pause-482945 kubelet[3750]: E0717 22:42:04.622216    3750 manager.go:1106] Failed to create existing container: /kubepods/burstable/podbc8d774132f3e0d505df5afbd8cf90cf/crio-df80402bed589683bbd5973ac968bd2cf6db69fbc26c80a050d3f5c2dd05ac45: Error finding container df80402bed589683bbd5973ac968bd2cf6db69fbc26c80a050d3f5c2dd05ac45: Status 404 returned error can't find the container with id df80402bed589683bbd5973ac968bd2cf6db69fbc26c80a050d3f5c2dd05ac45
	Jul 17 22:42:04 pause-482945 kubelet[3750]: E0717 22:42:04.623070    3750 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod79007c8b63df44d0b74e723ffe8e6a07/crio-9f8cf08949e7302b373ff6a2533d1550dc21c212ed789d76063b3ce7c73853d3: Error finding container 9f8cf08949e7302b373ff6a2533d1550dc21c212ed789d76063b3ce7c73853d3: Status 404 returned error can't find the container with id 9f8cf08949e7302b373ff6a2533d1550dc21c212ed789d76063b3ce7c73853d3
	Jul 17 22:42:04 pause-482945 kubelet[3750]: E0717 22:42:04.624364    3750 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod8cb0c3963dd9c9298d8758b4a0d5be12/crio-a0025da2db90835f245b5f8a6d0bc9e679beedcd18b51800b617c700ae53d5da: Error finding container a0025da2db90835f245b5f8a6d0bc9e679beedcd18b51800b617c700ae53d5da: Status 404 returned error can't find the container with id a0025da2db90835f245b5f8a6d0bc9e679beedcd18b51800b617c700ae53d5da
	Jul 17 22:42:04 pause-482945 kubelet[3750]: I0717 22:42:04.681127    3750 scope.go:115] "RemoveContainer" containerID="2bb26acf8425b19f67ced9fdf9b40e79fc7302c4fd666183ac14e0d1dcce5d86"
	Jul 17 22:42:04 pause-482945 kubelet[3750]: I0717 22:42:04.721657    3750 scope.go:115] "RemoveContainer" containerID="06bb567206e635dcd0a8952bebe235dcfe307f7da66faa94045ef7bf035fbf59"
	Jul 17 22:42:04 pause-482945 kubelet[3750]: I0717 22:42:04.735932    3750 scope.go:115] "RemoveContainer" containerID="06bb567206e635dcd0a8952bebe235dcfe307f7da66faa94045ef7bf035fbf59"
	Jul 17 22:42:04 pause-482945 kubelet[3750]: I0717 22:42:04.781714    3750 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8fsl\" (UniqueName: \"kubernetes.io/projected/d503aa06-1a7d-405f-8a1d-7c97f5901d9c-kube-api-access-j8fsl\") pod \"d503aa06-1a7d-405f-8a1d-7c97f5901d9c\" (UID: \"d503aa06-1a7d-405f-8a1d-7c97f5901d9c\") "
	Jul 17 22:42:04 pause-482945 kubelet[3750]: I0717 22:42:04.781767    3750 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d503aa06-1a7d-405f-8a1d-7c97f5901d9c-config-volume\") pod \"d503aa06-1a7d-405f-8a1d-7c97f5901d9c\" (UID: \"d503aa06-1a7d-405f-8a1d-7c97f5901d9c\") "
	Jul 17 22:42:04 pause-482945 kubelet[3750]: W0717 22:42:04.782114    3750 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/d503aa06-1a7d-405f-8a1d-7c97f5901d9c/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Jul 17 22:42:04 pause-482945 kubelet[3750]: I0717 22:42:04.782348    3750 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d503aa06-1a7d-405f-8a1d-7c97f5901d9c-config-volume" (OuterVolumeSpecName: "config-volume") pod "d503aa06-1a7d-405f-8a1d-7c97f5901d9c" (UID: "d503aa06-1a7d-405f-8a1d-7c97f5901d9c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Jul 17 22:42:04 pause-482945 kubelet[3750]: I0717 22:42:04.786553    3750 scope.go:115] "RemoveContainer" containerID="2bb26acf8425b19f67ced9fdf9b40e79fc7302c4fd666183ac14e0d1dcce5d86"
	Jul 17 22:42:04 pause-482945 kubelet[3750]: E0717 22:42:04.787358    3750 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bb26acf8425b19f67ced9fdf9b40e79fc7302c4fd666183ac14e0d1dcce5d86\": container with ID starting with 2bb26acf8425b19f67ced9fdf9b40e79fc7302c4fd666183ac14e0d1dcce5d86 not found: ID does not exist" containerID="2bb26acf8425b19f67ced9fdf9b40e79fc7302c4fd666183ac14e0d1dcce5d86"
	Jul 17 22:42:04 pause-482945 kubelet[3750]: I0717 22:42:04.787408    3750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:2bb26acf8425b19f67ced9fdf9b40e79fc7302c4fd666183ac14e0d1dcce5d86} err="failed to get container status \"2bb26acf8425b19f67ced9fdf9b40e79fc7302c4fd666183ac14e0d1dcce5d86\": rpc error: code = NotFound desc = could not find container \"2bb26acf8425b19f67ced9fdf9b40e79fc7302c4fd666183ac14e0d1dcce5d86\": container with ID starting with 2bb26acf8425b19f67ced9fdf9b40e79fc7302c4fd666183ac14e0d1dcce5d86 not found: ID does not exist"
	Jul 17 22:42:04 pause-482945 kubelet[3750]: I0717 22:42:04.787460    3750 scope.go:115] "RemoveContainer" containerID="06bb567206e635dcd0a8952bebe235dcfe307f7da66faa94045ef7bf035fbf59"
	Jul 17 22:42:04 pause-482945 kubelet[3750]: E0717 22:42:04.788258    3750 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06bb567206e635dcd0a8952bebe235dcfe307f7da66faa94045ef7bf035fbf59\": container with ID starting with 06bb567206e635dcd0a8952bebe235dcfe307f7da66faa94045ef7bf035fbf59 not found: ID does not exist" containerID="06bb567206e635dcd0a8952bebe235dcfe307f7da66faa94045ef7bf035fbf59"
	Jul 17 22:42:04 pause-482945 kubelet[3750]: I0717 22:42:04.788330    3750 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:06bb567206e635dcd0a8952bebe235dcfe307f7da66faa94045ef7bf035fbf59} err="failed to get container status \"06bb567206e635dcd0a8952bebe235dcfe307f7da66faa94045ef7bf035fbf59\": rpc error: code = NotFound desc = could not find container \"06bb567206e635dcd0a8952bebe235dcfe307f7da66faa94045ef7bf035fbf59\": container with ID starting with 06bb567206e635dcd0a8952bebe235dcfe307f7da66faa94045ef7bf035fbf59 not found: ID does not exist"
	Jul 17 22:42:04 pause-482945 kubelet[3750]: I0717 22:42:04.805447    3750 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d503aa06-1a7d-405f-8a1d-7c97f5901d9c-kube-api-access-j8fsl" (OuterVolumeSpecName: "kube-api-access-j8fsl") pod "d503aa06-1a7d-405f-8a1d-7c97f5901d9c" (UID: "d503aa06-1a7d-405f-8a1d-7c97f5901d9c"). InnerVolumeSpecName "kube-api-access-j8fsl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 22:42:04 pause-482945 kubelet[3750]: E0717 22:42:04.820857    3750 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = no such id: '06bb567206e635dcd0a8952bebe235dcfe307f7da66faa94045ef7bf035fbf59'" containerID="06bb567206e635dcd0a8952bebe235dcfe307f7da66faa94045ef7bf035fbf59"
	Jul 17 22:42:04 pause-482945 kubelet[3750]: E0717 22:42:04.821118    3750 kuberuntime_gc.go:150] "Failed to remove container" err="rpc error: code = Unknown desc = no such id: '06bb567206e635dcd0a8952bebe235dcfe307f7da66faa94045ef7bf035fbf59'" containerID="06bb567206e635dcd0a8952bebe235dcfe307f7da66faa94045ef7bf035fbf59"
	Jul 17 22:42:04 pause-482945 kubelet[3750]: I0717 22:42:04.883432    3750 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-j8fsl\" (UniqueName: \"kubernetes.io/projected/d503aa06-1a7d-405f-8a1d-7c97f5901d9c-kube-api-access-j8fsl\") on node \"pause-482945\" DevicePath \"\""
	Jul 17 22:42:04 pause-482945 kubelet[3750]: I0717 22:42:04.883507    3750 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d503aa06-1a7d-405f-8a1d-7c97f5901d9c-config-volume\") on node \"pause-482945\" DevicePath \"\""
	Jul 17 22:42:04 pause-482945 kubelet[3750]: E0717 22:42:04.928745    3750 cadvisor_stats_provider.go:442] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/burstable/podd503aa06-1a7d-405f-8a1d-7c97f5901d9c/crio-b7b26560eed5dde0ff5750e33522c33f5b537097e79826022dd708495760f8f2\": RecentStats: unable to find data in memory cache], [\"/kubepods/burstable/podd503aa06-1a7d-405f-8a1d-7c97f5901d9c/crio-conmon-b7b26560eed5dde0ff5750e33522c33f5b537097e79826022dd708495760f8f2\": RecentStats: unable to find data in memory cache]"
	Jul 17 22:42:06 pause-482945 kubelet[3750]: I0717 22:42:06.173727    3750 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=d503aa06-1a7d-405f-8a1d-7c97f5901d9c path="/var/lib/kubelet/pods/d503aa06-1a7d-405f-8a1d-7c97f5901d9c/volumes"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-482945 -n pause-482945
helpers_test.go:261: (dbg) Run:  kubectl --context pause-482945 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (131.59s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (139.6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-332820 --alsologtostderr -v=3
E0717 22:43:11.892512   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-332820 --alsologtostderr -v=3: exit status 82 (2m1.092952876s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-332820"  ...
	* Stopping node "old-k8s-version-332820"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 22:43:00.874308   52739 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:43:00.874512   52739 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:43:00.874540   52739 out.go:309] Setting ErrFile to fd 2...
	I0717 22:43:00.874557   52739 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:43:00.874790   52739 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-15759/.minikube/bin
	I0717 22:43:00.875083   52739 out.go:303] Setting JSON to false
	I0717 22:43:00.875220   52739 mustload.go:65] Loading cluster: old-k8s-version-332820
	I0717 22:43:00.875572   52739 config.go:182] Loaded profile config "old-k8s-version-332820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0717 22:43:00.875713   52739 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/config.json ...
	I0717 22:43:00.875954   52739 mustload.go:65] Loading cluster: old-k8s-version-332820
	I0717 22:43:00.876130   52739 config.go:182] Loaded profile config "old-k8s-version-332820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0717 22:43:00.876179   52739 stop.go:39] StopHost: old-k8s-version-332820
	I0717 22:43:00.876570   52739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:43:00.876640   52739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:43:00.892257   52739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37935
	I0717 22:43:00.892734   52739 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:43:00.893512   52739 main.go:141] libmachine: Using API Version  1
	I0717 22:43:00.893549   52739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:43:00.893903   52739 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:43:00.896459   52739 out.go:177] * Stopping node "old-k8s-version-332820"  ...
	I0717 22:43:00.898286   52739 main.go:141] libmachine: Stopping "old-k8s-version-332820"...
	I0717 22:43:00.898312   52739 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetState
	I0717 22:43:00.900034   52739 main.go:141] libmachine: (old-k8s-version-332820) Calling .Stop
	I0717 22:43:00.904954   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 0/60
	I0717 22:43:01.907506   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 1/60
	I0717 22:43:02.909050   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 2/60
	I0717 22:43:03.910625   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 3/60
	I0717 22:43:04.912870   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 4/60
	I0717 22:43:05.914496   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 5/60
	I0717 22:43:06.916228   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 6/60
	I0717 22:43:07.917855   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 7/60
	I0717 22:43:08.919918   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 8/60
	I0717 22:43:09.921672   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 9/60
	I0717 22:43:10.923908   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 10/60
	I0717 22:43:11.925403   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 11/60
	I0717 22:43:12.926720   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 12/60
	I0717 22:43:13.927835   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 13/60
	I0717 22:43:14.932993   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 14/60
	I0717 22:43:15.935303   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 15/60
	I0717 22:43:16.937653   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 16/60
	I0717 22:43:17.939196   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 17/60
	I0717 22:43:18.940760   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 18/60
	I0717 22:43:19.942764   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 19/60
	I0717 22:43:20.945107   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 20/60
	I0717 22:43:21.947002   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 21/60
	I0717 22:43:22.949097   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 22/60
	I0717 22:43:23.963552   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 23/60
	I0717 22:43:24.964173   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 24/60
	I0717 22:43:25.966329   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 25/60
	I0717 22:43:26.968044   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 26/60
	I0717 22:43:27.970368   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 27/60
	I0717 22:43:28.972250   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 28/60
	I0717 22:43:29.973819   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 29/60
	I0717 22:43:30.975826   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 30/60
	I0717 22:43:31.977187   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 31/60
	I0717 22:43:32.978583   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 32/60
	I0717 22:43:33.979900   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 33/60
	I0717 22:43:34.981487   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 34/60
	I0717 22:43:35.983670   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 35/60
	I0717 22:43:36.985008   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 36/60
	I0717 22:43:37.986766   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 37/60
	I0717 22:43:38.988945   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 38/60
	I0717 22:43:39.990267   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 39/60
	I0717 22:43:40.992333   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 40/60
	I0717 22:43:41.994406   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 41/60
	I0717 22:43:42.996032   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 42/60
	I0717 22:43:43.998219   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 43/60
	I0717 22:43:45.000347   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 44/60
	I0717 22:43:46.002232   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 45/60
	I0717 22:43:47.004136   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 46/60
	I0717 22:43:48.005488   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 47/60
	I0717 22:43:49.006969   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 48/60
	I0717 22:43:50.008527   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 49/60
	I0717 22:43:51.010201   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 50/60
	I0717 22:43:52.012287   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 51/60
	I0717 22:43:53.013812   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 52/60
	I0717 22:43:54.016295   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 53/60
	I0717 22:43:55.017685   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 54/60
	I0717 22:43:56.019942   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 55/60
	I0717 22:43:57.021366   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 56/60
	I0717 22:43:58.022969   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 57/60
	I0717 22:43:59.024425   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 58/60
	I0717 22:44:00.027075   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 59/60
	I0717 22:44:01.028444   52739 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0717 22:44:01.028514   52739 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 22:44:01.028539   52739 retry.go:31] will retry after 760.380804ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 22:44:01.789456   52739 stop.go:39] StopHost: old-k8s-version-332820
	I0717 22:44:01.789811   52739 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:44:01.789850   52739 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:44:01.804406   52739 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42265
	I0717 22:44:01.804855   52739 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:44:01.805293   52739 main.go:141] libmachine: Using API Version  1
	I0717 22:44:01.805308   52739 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:44:01.805618   52739 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:44:01.807681   52739 out.go:177] * Stopping node "old-k8s-version-332820"  ...
	I0717 22:44:01.809004   52739 main.go:141] libmachine: Stopping "old-k8s-version-332820"...
	I0717 22:44:01.809021   52739 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetState
	I0717 22:44:01.810768   52739 main.go:141] libmachine: (old-k8s-version-332820) Calling .Stop
	I0717 22:44:01.813866   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 0/60
	I0717 22:44:02.815936   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 1/60
	I0717 22:44:03.817269   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 2/60
	I0717 22:44:04.818606   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 3/60
	I0717 22:44:05.820901   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 4/60
	I0717 22:44:06.822291   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 5/60
	I0717 22:44:07.824477   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 6/60
	I0717 22:44:08.826134   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 7/60
	I0717 22:44:09.828371   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 8/60
	I0717 22:44:10.830429   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 9/60
	I0717 22:44:11.831997   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 10/60
	I0717 22:44:12.833566   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 11/60
	I0717 22:44:13.835397   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 12/60
	I0717 22:44:14.836928   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 13/60
	I0717 22:44:15.838578   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 14/60
	I0717 22:44:16.841083   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 15/60
	I0717 22:44:17.842297   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 16/60
	I0717 22:44:18.843911   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 17/60
	I0717 22:44:19.846119   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 18/60
	I0717 22:44:20.848422   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 19/60
	I0717 22:44:21.850274   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 20/60
	I0717 22:44:22.851660   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 21/60
	I0717 22:44:23.852863   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 22/60
	I0717 22:44:24.854313   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 23/60
	I0717 22:44:25.856319   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 24/60
	I0717 22:44:26.857858   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 25/60
	I0717 22:44:27.859481   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 26/60
	I0717 22:44:28.861901   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 27/60
	I0717 22:44:29.864134   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 28/60
	I0717 22:44:30.865380   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 29/60
	I0717 22:44:31.867315   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 30/60
	I0717 22:44:32.869384   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 31/60
	I0717 22:44:33.871724   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 32/60
	I0717 22:44:34.873276   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 33/60
	I0717 22:44:35.874566   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 34/60
	I0717 22:44:36.875998   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 35/60
	I0717 22:44:37.877163   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 36/60
	I0717 22:44:38.878669   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 37/60
	I0717 22:44:39.879961   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 38/60
	I0717 22:44:40.881414   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 39/60
	I0717 22:44:41.883997   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 40/60
	I0717 22:44:42.885503   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 41/60
	I0717 22:44:43.887559   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 42/60
	I0717 22:44:44.889148   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 43/60
	I0717 22:44:45.890741   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 44/60
	I0717 22:44:46.891971   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 45/60
	I0717 22:44:47.893369   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 46/60
	I0717 22:44:48.894823   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 47/60
	I0717 22:44:49.896666   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 48/60
	I0717 22:44:50.897983   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 49/60
	I0717 22:44:51.899744   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 50/60
	I0717 22:44:52.901349   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 51/60
	I0717 22:44:53.902995   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 52/60
	I0717 22:44:54.904411   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 53/60
	I0717 22:44:55.906375   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 54/60
	I0717 22:44:56.908760   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 55/60
	I0717 22:44:57.910576   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 56/60
	I0717 22:44:58.912156   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 57/60
	I0717 22:44:59.914396   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 58/60
	I0717 22:45:00.916679   52739 main.go:141] libmachine: (old-k8s-version-332820) Waiting for machine to stop 59/60
	I0717 22:45:01.917545   52739 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0717 22:45:01.917586   52739 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 22:45:01.919496   52739 out.go:177] 
	W0717 22:45:01.921115   52739 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0717 22:45:01.921133   52739 out.go:239] * 
	* 
	W0717 22:45:01.923471   52739 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 22:45:01.925018   52739 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p old-k8s-version-332820 --alsologtostderr -v=3" : exit status 82
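
The assertion above quotes the exact command that timed out. As a minimal sketch (not the integration harness itself), the Go snippet below re-runs that same command and reads back the reported exit status; the binary path and profile name are copied verbatim from the log line, and on this runner the GUEST_STOP_TIMEOUT surfaced as exit status 82.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Command and profile name copied from the failing assertion above.
		cmd := exec.Command("out/minikube-linux-amd64", "stop", "-p", "old-k8s-version-332820",
			"--alsologtostderr", "-v=3")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if exitErr, ok := err.(*exec.ExitError); ok {
			// On this runner the stop timeout was reported as exit status 82.
			fmt.Println("exit status:", exitErr.ExitCode())
		}
	}
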
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-332820 -n old-k8s-version-332820
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-332820 -n old-k8s-version-332820: exit status 3 (18.506682412s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 22:45:20.433815   53593 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.149:22: connect: no route to host
	E0717 22:45:20.433834   53593 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.149:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-332820" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (139.60s)
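
The stderr trace above shows the shape of the failure: the stop path polls the guest once per second for 60 attempts, retries the whole stop once after a sub-second backoff, and then exits with GUEST_STOP_TIMEOUT. The Go sketch below reproduces that wait-and-retry pattern in isolation; stopVM and vmIsRunning are hypothetical placeholders (not minikube's or libmachine's API), and the attempt count is shortened for the demo.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	var errStillRunning = errors.New(`unable to stop vm, current state "Running"`)

	// stopVM and vmIsRunning are placeholders; the real driver talks to
	// libvirt through the kvm2 plugin.
	func stopVM(name string) error { fmt.Println("Stopping", name, "..."); return nil }

	func vmIsRunning(name string) bool { return true } // simulate a guest that never stops

	// waitForStop issues a stop and then polls once per second, mirroring
	// the "Waiting for machine to stop i/60" lines above (shortened here).
	func waitForStop(name string, attempts int) error {
		if err := stopVM(name); err != nil {
			return err
		}
		for i := 0; i < attempts; i++ {
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			if !vmIsRunning(name) {
				return nil
			}
			time.Sleep(time.Second)
		}
		return errStillRunning
	}

	func main() {
		const name = "old-k8s-version-332820"
		if err := waitForStop(name, 5); err != nil { // the log uses 60 attempts
			time.Sleep(760 * time.Millisecond) // single retry after a short backoff
			if err := waitForStop(name, 5); err != nil {
				fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT:", err)
			}
		}
	}
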

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (140.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-571296 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-571296 --alsologtostderr -v=3: exit status 82 (2m1.726161425s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-571296"  ...
	* Stopping node "embed-certs-571296"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 22:44:20.408488   53357 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:44:20.408678   53357 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:44:20.408714   53357 out.go:309] Setting ErrFile to fd 2...
	I0717 22:44:20.408732   53357 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:44:20.408973   53357 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-15759/.minikube/bin
	I0717 22:44:20.409761   53357 out.go:303] Setting JSON to false
	I0717 22:44:20.409926   53357 mustload.go:65] Loading cluster: embed-certs-571296
	I0717 22:44:20.412139   53357 config.go:182] Loaded profile config "embed-certs-571296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:44:20.412432   53357 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296/config.json ...
	I0717 22:44:20.412704   53357 mustload.go:65] Loading cluster: embed-certs-571296
	I0717 22:44:20.412933   53357 config.go:182] Loaded profile config "embed-certs-571296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:44:20.412976   53357 stop.go:39] StopHost: embed-certs-571296
	I0717 22:44:20.414087   53357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:44:20.414184   53357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:44:20.430261   53357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40675
	I0717 22:44:20.430766   53357 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:44:20.431431   53357 main.go:141] libmachine: Using API Version  1
	I0717 22:44:20.431459   53357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:44:20.431844   53357 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:44:20.434311   53357 out.go:177] * Stopping node "embed-certs-571296"  ...
	I0717 22:44:20.436818   53357 main.go:141] libmachine: Stopping "embed-certs-571296"...
	I0717 22:44:20.436850   53357 main.go:141] libmachine: (embed-certs-571296) Calling .GetState
	I0717 22:44:20.439071   53357 main.go:141] libmachine: (embed-certs-571296) Calling .Stop
	I0717 22:44:20.443303   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 0/60
	I0717 22:44:21.444755   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 1/60
	I0717 22:44:22.446421   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 2/60
	I0717 22:44:23.447917   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 3/60
	I0717 22:44:24.449303   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 4/60
	I0717 22:44:25.451203   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 5/60
	I0717 22:44:26.452491   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 6/60
	I0717 22:44:27.453876   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 7/60
	I0717 22:44:28.456036   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 8/60
	I0717 22:44:29.457258   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 9/60
	I0717 22:44:30.459131   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 10/60
	I0717 22:44:31.460485   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 11/60
	I0717 22:44:32.462021   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 12/60
	I0717 22:44:33.463409   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 13/60
	I0717 22:44:34.464740   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 14/60
	I0717 22:44:35.466821   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 15/60
	I0717 22:44:36.468139   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 16/60
	I0717 22:44:37.469358   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 17/60
	I0717 22:44:38.470631   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 18/60
	I0717 22:44:39.471847   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 19/60
	I0717 22:44:40.473695   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 20/60
	I0717 22:44:41.475292   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 21/60
	I0717 22:44:42.476660   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 22/60
	I0717 22:44:43.478302   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 23/60
	I0717 22:44:44.480900   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 24/60
	I0717 22:44:45.483027   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 25/60
	I0717 22:44:46.484514   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 26/60
	I0717 22:44:47.485815   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 27/60
	I0717 22:44:48.488317   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 28/60
	I0717 22:44:49.490480   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 29/60
	I0717 22:44:50.492503   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 30/60
	I0717 22:44:51.494142   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 31/60
	I0717 22:44:52.495756   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 32/60
	I0717 22:44:53.497246   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 33/60
	I0717 22:44:54.501191   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 34/60
	I0717 22:44:55.503217   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 35/60
	I0717 22:44:56.504803   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 36/60
	I0717 22:44:57.506394   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 37/60
	I0717 22:44:58.508032   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 38/60
	I0717 22:44:59.509090   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 39/60
	I0717 22:45:00.511373   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 40/60
	I0717 22:45:01.512953   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 41/60
	I0717 22:45:02.514747   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 42/60
	I0717 22:45:03.516275   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 43/60
	I0717 22:45:04.517968   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 44/60
	I0717 22:45:05.519927   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 45/60
	I0717 22:45:06.521568   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 46/60
	I0717 22:45:07.523063   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 47/60
	I0717 22:45:08.524477   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 48/60
	I0717 22:45:09.525869   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 49/60
	I0717 22:45:10.527982   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 50/60
	I0717 22:45:11.529417   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 51/60
	I0717 22:45:12.530999   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 52/60
	I0717 22:45:13.532522   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 53/60
	I0717 22:45:14.533954   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 54/60
	I0717 22:45:15.535870   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 55/60
	I0717 22:45:16.537277   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 56/60
	I0717 22:45:17.538792   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 57/60
	I0717 22:45:18.540189   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 58/60
	I0717 22:45:19.541536   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 59/60
	I0717 22:45:20.542503   53357 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0717 22:45:20.542536   53357 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 22:45:20.542551   53357 retry.go:31] will retry after 1.408905056s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 22:45:21.952046   53357 stop.go:39] StopHost: embed-certs-571296
	I0717 22:45:21.952429   53357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:45:21.952486   53357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:45:21.966995   53357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35297
	I0717 22:45:21.967416   53357 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:45:21.967870   53357 main.go:141] libmachine: Using API Version  1
	I0717 22:45:21.967892   53357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:45:21.968169   53357 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:45:21.970312   53357 out.go:177] * Stopping node "embed-certs-571296"  ...
	I0717 22:45:21.971663   53357 main.go:141] libmachine: Stopping "embed-certs-571296"...
	I0717 22:45:21.971679   53357 main.go:141] libmachine: (embed-certs-571296) Calling .GetState
	I0717 22:45:21.973260   53357 main.go:141] libmachine: (embed-certs-571296) Calling .Stop
	I0717 22:45:21.976697   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 0/60
	I0717 22:45:22.978231   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 1/60
	I0717 22:45:23.979721   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 2/60
	I0717 22:45:24.980981   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 3/60
	I0717 22:45:25.982464   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 4/60
	I0717 22:45:26.984371   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 5/60
	I0717 22:45:27.985882   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 6/60
	I0717 22:45:28.987403   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 7/60
	I0717 22:45:29.988921   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 8/60
	I0717 22:45:30.990284   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 9/60
	I0717 22:45:31.992422   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 10/60
	I0717 22:45:32.993546   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 11/60
	I0717 22:45:33.995604   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 12/60
	I0717 22:45:34.997115   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 13/60
	I0717 22:45:35.998737   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 14/60
	I0717 22:45:37.000683   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 15/60
	I0717 22:45:38.001959   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 16/60
	I0717 22:45:39.003435   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 17/60
	I0717 22:45:40.005547   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 18/60
	I0717 22:45:41.007065   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 19/60
	I0717 22:45:42.008677   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 20/60
	I0717 22:45:43.010541   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 21/60
	I0717 22:45:44.011864   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 22/60
	I0717 22:45:45.013709   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 23/60
	I0717 22:45:46.015555   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 24/60
	I0717 22:45:47.017234   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 25/60
	I0717 22:45:48.018727   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 26/60
	I0717 22:45:49.019954   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 27/60
	I0717 22:45:50.021538   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 28/60
	I0717 22:45:51.022947   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 29/60
	I0717 22:45:52.025022   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 30/60
	I0717 22:45:53.026553   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 31/60
	I0717 22:45:54.028381   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 32/60
	I0717 22:45:55.029763   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 33/60
	I0717 22:45:56.031398   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 34/60
	I0717 22:45:57.032812   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 35/60
	I0717 22:45:58.034186   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 36/60
	I0717 22:45:59.035606   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 37/60
	I0717 22:46:00.038016   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 38/60
	I0717 22:46:01.039384   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 39/60
	I0717 22:46:02.041085   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 40/60
	I0717 22:46:03.043632   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 41/60
	I0717 22:46:04.045278   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 42/60
	I0717 22:46:05.046895   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 43/60
	I0717 22:46:06.048382   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 44/60
	I0717 22:46:07.050243   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 45/60
	I0717 22:46:08.051552   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 46/60
	I0717 22:46:09.052942   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 47/60
	I0717 22:46:10.054302   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 48/60
	I0717 22:46:11.056116   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 49/60
	I0717 22:46:12.057870   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 50/60
	I0717 22:46:13.059391   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 51/60
	I0717 22:46:14.060870   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 52/60
	I0717 22:46:15.062205   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 53/60
	I0717 22:46:16.063664   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 54/60
	I0717 22:46:17.065289   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 55/60
	I0717 22:46:18.066741   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 56/60
	I0717 22:46:19.068051   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 57/60
	I0717 22:46:20.069507   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 58/60
	I0717 22:46:21.070903   53357 main.go:141] libmachine: (embed-certs-571296) Waiting for machine to stop 59/60
	I0717 22:46:22.072026   53357 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0717 22:46:22.072068   53357 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 22:46:22.073939   53357 out.go:177] 
	W0717 22:46:22.075321   53357 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0717 22:46:22.075336   53357 out.go:239] * 
	* 
	W0717 22:46:22.077960   53357 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 22:46:22.079340   53357 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-571296 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-571296 -n embed-certs-571296
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-571296 -n embed-certs-571296: exit status 3 (18.480789096s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 22:46:40.561884   54070 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.179:22: connect: no route to host
	E0717 22:46:40.561906   54070 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.179:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-571296" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (140.21s)
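
The post-mortem above reports state "Error" because the status probe cannot reach the guest's SSH port (dial tcp 192.168.61.179:22: connect: no route to host). The Go sketch below reproduces just that reachability check for illustration; it is not minikube's status implementation, and the address is taken from the log.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "192.168.61.179:22" // guest SSH endpoint from the embed-certs post-mortem
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			// On this runner the dial fails with "connect: no route to host",
			// so status falls back to "Error" and log retrieval is skipped.
			fmt.Println("status error:", err)
			return
		}
		conn.Close()
		fmt.Println("ssh port reachable; host may still be running")
	}
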

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (139.59s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-935524 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-935524 --alsologtostderr -v=3: exit status 82 (2m1.079800355s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-935524"  ...
	* Stopping node "no-preload-935524"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 22:45:00.700724   53564 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:45:00.700859   53564 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:45:00.700870   53564 out.go:309] Setting ErrFile to fd 2...
	I0717 22:45:00.700877   53564 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:45:00.701104   53564 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-15759/.minikube/bin
	I0717 22:45:00.701404   53564 out.go:303] Setting JSON to false
	I0717 22:45:00.701541   53564 mustload.go:65] Loading cluster: no-preload-935524
	I0717 22:45:00.701883   53564 config.go:182] Loaded profile config "no-preload-935524": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:45:00.702012   53564 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/config.json ...
	I0717 22:45:00.702196   53564 mustload.go:65] Loading cluster: no-preload-935524
	I0717 22:45:00.702334   53564 config.go:182] Loaded profile config "no-preload-935524": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:45:00.702371   53564 stop.go:39] StopHost: no-preload-935524
	I0717 22:45:00.702798   53564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:45:00.702857   53564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:45:00.716932   53564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34631
	I0717 22:45:00.717336   53564 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:45:00.717976   53564 main.go:141] libmachine: Using API Version  1
	I0717 22:45:00.718001   53564 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:45:00.718326   53564 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:45:00.721484   53564 out.go:177] * Stopping node "no-preload-935524"  ...
	I0717 22:45:00.723564   53564 main.go:141] libmachine: Stopping "no-preload-935524"...
	I0717 22:45:00.723584   53564 main.go:141] libmachine: (no-preload-935524) Calling .GetState
	I0717 22:45:00.725253   53564 main.go:141] libmachine: (no-preload-935524) Calling .Stop
	I0717 22:45:00.729011   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 0/60
	I0717 22:45:01.730425   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 1/60
	I0717 22:45:02.732395   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 2/60
	I0717 22:45:03.733807   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 3/60
	I0717 22:45:04.736139   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 4/60
	I0717 22:45:05.738306   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 5/60
	I0717 22:45:06.739913   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 6/60
	I0717 22:45:07.741377   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 7/60
	I0717 22:45:08.742814   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 8/60
	I0717 22:45:09.744144   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 9/60
	I0717 22:45:10.746443   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 10/60
	I0717 22:45:11.747723   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 11/60
	I0717 22:45:12.749149   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 12/60
	I0717 22:45:13.750560   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 13/60
	I0717 22:45:14.751901   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 14/60
	I0717 22:45:15.753774   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 15/60
	I0717 22:45:16.756375   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 16/60
	I0717 22:45:17.757726   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 17/60
	I0717 22:45:18.759162   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 18/60
	I0717 22:45:19.760764   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 19/60
	I0717 22:45:20.762696   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 20/60
	I0717 22:45:21.764396   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 21/60
	I0717 22:45:22.766021   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 22/60
	I0717 22:45:23.767925   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 23/60
	I0717 22:45:24.769451   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 24/60
	I0717 22:45:25.771476   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 25/60
	I0717 22:45:26.773059   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 26/60
	I0717 22:45:27.774722   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 27/60
	I0717 22:45:28.776074   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 28/60
	I0717 22:45:29.777613   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 29/60
	I0717 22:45:30.779852   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 30/60
	I0717 22:45:31.781098   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 31/60
	I0717 22:45:32.782632   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 32/60
	I0717 22:45:33.783989   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 33/60
	I0717 22:45:34.785409   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 34/60
	I0717 22:45:35.787409   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 35/60
	I0717 22:45:36.789499   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 36/60
	I0717 22:45:37.790898   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 37/60
	I0717 22:45:38.792391   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 38/60
	I0717 22:45:39.793956   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 39/60
	I0717 22:45:40.796191   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 40/60
	I0717 22:45:41.797628   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 41/60
	I0717 22:45:42.799000   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 42/60
	I0717 22:45:43.800729   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 43/60
	I0717 22:45:44.802223   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 44/60
	I0717 22:45:45.804065   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 45/60
	I0717 22:45:46.805214   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 46/60
	I0717 22:45:47.806694   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 47/60
	I0717 22:45:48.808174   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 48/60
	I0717 22:45:49.809414   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 49/60
	I0717 22:45:50.811415   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 50/60
	I0717 22:45:51.812594   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 51/60
	I0717 22:45:52.814113   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 52/60
	I0717 22:45:53.815449   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 53/60
	I0717 22:45:54.817049   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 54/60
	I0717 22:45:55.818954   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 55/60
	I0717 22:45:56.820086   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 56/60
	I0717 22:45:57.821542   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 57/60
	I0717 22:45:58.823223   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 58/60
	I0717 22:45:59.824957   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 59/60
	I0717 22:46:00.826348   53564 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0717 22:46:00.826419   53564 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 22:46:00.826441   53564 retry.go:31] will retry after 780.812478ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 22:46:01.607404   53564 stop.go:39] StopHost: no-preload-935524
	I0717 22:46:01.607776   53564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:46:01.607810   53564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:46:01.622055   53564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44089
	I0717 22:46:01.622431   53564 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:46:01.623033   53564 main.go:141] libmachine: Using API Version  1
	I0717 22:46:01.623054   53564 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:46:01.623337   53564 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:46:01.625543   53564 out.go:177] * Stopping node "no-preload-935524"  ...
	I0717 22:46:01.627027   53564 main.go:141] libmachine: Stopping "no-preload-935524"...
	I0717 22:46:01.627045   53564 main.go:141] libmachine: (no-preload-935524) Calling .GetState
	I0717 22:46:01.628695   53564 main.go:141] libmachine: (no-preload-935524) Calling .Stop
	I0717 22:46:01.631841   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 0/60
	I0717 22:46:02.633193   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 1/60
	I0717 22:46:03.634689   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 2/60
	I0717 22:46:04.636191   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 3/60
	I0717 22:46:05.637698   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 4/60
	I0717 22:46:06.639218   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 5/60
	I0717 22:46:07.640630   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 6/60
	I0717 22:46:08.642093   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 7/60
	I0717 22:46:09.643461   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 8/60
	I0717 22:46:10.644949   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 9/60
	I0717 22:46:11.646759   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 10/60
	I0717 22:46:12.648253   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 11/60
	I0717 22:46:13.649667   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 12/60
	I0717 22:46:14.651073   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 13/60
	I0717 22:46:15.652433   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 14/60
	I0717 22:46:16.653977   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 15/60
	I0717 22:46:17.655420   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 16/60
	I0717 22:46:18.656804   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 17/60
	I0717 22:46:19.658418   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 18/60
	I0717 22:46:20.659819   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 19/60
	I0717 22:46:21.661317   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 20/60
	I0717 22:46:22.662709   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 21/60
	I0717 22:46:23.664121   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 22/60
	I0717 22:46:24.665404   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 23/60
	I0717 22:46:25.666823   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 24/60
	I0717 22:46:26.668473   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 25/60
	I0717 22:46:27.669857   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 26/60
	I0717 22:46:28.671367   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 27/60
	I0717 22:46:29.672736   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 28/60
	I0717 22:46:30.674346   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 29/60
	I0717 22:46:31.676117   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 30/60
	I0717 22:46:32.677506   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 31/60
	I0717 22:46:33.678984   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 32/60
	I0717 22:46:34.680339   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 33/60
	I0717 22:46:35.681704   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 34/60
	I0717 22:46:36.683462   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 35/60
	I0717 22:46:37.684996   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 36/60
	I0717 22:46:38.686450   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 37/60
	I0717 22:46:39.687942   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 38/60
	I0717 22:46:40.689230   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 39/60
	I0717 22:46:41.690876   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 40/60
	I0717 22:46:42.692263   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 41/60
	I0717 22:46:43.693885   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 42/60
	I0717 22:46:44.695279   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 43/60
	I0717 22:46:45.696785   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 44/60
	I0717 22:46:46.698250   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 45/60
	I0717 22:46:47.699559   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 46/60
	I0717 22:46:48.700931   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 47/60
	I0717 22:46:49.702322   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 48/60
	I0717 22:46:50.703909   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 49/60
	I0717 22:46:51.705812   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 50/60
	I0717 22:46:52.707471   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 51/60
	I0717 22:46:53.708921   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 52/60
	I0717 22:46:54.710566   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 53/60
	I0717 22:46:55.711917   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 54/60
	I0717 22:46:56.714070   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 55/60
	I0717 22:46:57.715512   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 56/60
	I0717 22:46:58.717092   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 57/60
	I0717 22:46:59.718404   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 58/60
	I0717 22:47:00.719683   53564 main.go:141] libmachine: (no-preload-935524) Waiting for machine to stop 59/60
	I0717 22:47:01.721118   53564 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0717 22:47:01.721160   53564 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 22:47:01.723455   53564 out.go:177] 
	W0717 22:47:01.724959   53564 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0717 22:47:01.724973   53564 out.go:239] * 
	* 
	W0717 22:47:01.727267   53564 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 22:47:01.728791   53564 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p no-preload-935524 --alsologtostderr -v=3": exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-935524 -n no-preload-935524
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-935524 -n no-preload-935524: exit status 3 (18.510838241s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 22:47:20.241857   54294 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host
	E0717 22:47:20.241877   54294 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-935524" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.59s)
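Note: the stderr above shows libmachine polling the guest once per second and giving up after 60 checks ("Waiting for machine to stop 59/60") before surfacing GUEST_STOP_TIMEOUT. A minimal sketch of that bounded wait pattern, written against a hypothetical Machine interface (not minikube's actual driver API), for readers tracing the failure:

package stopwait

import (
	"errors"
	"fmt"
	"time"
)

// Machine is a hypothetical stand-in for a VM driver: it can request a stop
// and report its current state ("Running", "Stopped", ...).
type Machine interface {
	Stop() error
	State() (string, error)
}

// waitForStop asks the machine to stop, then polls once per second for up to
// maxChecks iterations, mirroring the "Waiting for machine to stop N/60"
// lines in the log above.
func waitForStop(m Machine, maxChecks int) error {
	if err := m.Stop(); err != nil {
		return fmt.Errorf("requesting stop: %w", err)
	}
	for i := 0; i < maxChecks; i++ {
		state, err := m.State()
		if err != nil {
			return fmt.Errorf("querying state: %w", err)
		}
		if state == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxChecks)
		time.Sleep(time.Second)
	}
	// After the last check the machine is still running, so the caller sees
	// the equivalent of: stop err: unable to stop vm, current state "Running".
	return errors.New(`unable to stop vm, current state "Running"`)
}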

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (140.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-504828 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-504828 --alsologtostderr -v=3: exit status 82 (2m1.757843664s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-504828"  ...
	* Stopping node "default-k8s-diff-port-504828"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 22:45:04.923312   53673 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:45:04.923431   53673 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:45:04.923439   53673 out.go:309] Setting ErrFile to fd 2...
	I0717 22:45:04.923443   53673 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:45:04.923633   53673 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-15759/.minikube/bin
	I0717 22:45:04.923859   53673 out.go:303] Setting JSON to false
	I0717 22:45:04.923950   53673 mustload.go:65] Loading cluster: default-k8s-diff-port-504828
	I0717 22:45:04.924320   53673 config.go:182] Loaded profile config "default-k8s-diff-port-504828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:45:04.924416   53673 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/config.json ...
	I0717 22:45:04.924619   53673 mustload.go:65] Loading cluster: default-k8s-diff-port-504828
	I0717 22:45:04.924802   53673 config.go:182] Loaded profile config "default-k8s-diff-port-504828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:45:04.924855   53673 stop.go:39] StopHost: default-k8s-diff-port-504828
	I0717 22:45:04.925377   53673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:45:04.925437   53673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:45:04.939597   53673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41803
	I0717 22:45:04.940001   53673 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:45:04.940575   53673 main.go:141] libmachine: Using API Version  1
	I0717 22:45:04.940637   53673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:45:04.941015   53673 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:45:04.943616   53673 out.go:177] * Stopping node "default-k8s-diff-port-504828"  ...
	I0717 22:45:04.945645   53673 main.go:141] libmachine: Stopping "default-k8s-diff-port-504828"...
	I0717 22:45:04.945665   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetState
	I0717 22:45:04.947325   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .Stop
	I0717 22:45:04.950814   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 0/60
	I0717 22:45:05.952270   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 1/60
	I0717 22:45:06.954600   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 2/60
	I0717 22:45:07.955909   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 3/60
	I0717 22:45:08.957329   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 4/60
	I0717 22:45:09.959608   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 5/60
	I0717 22:45:10.961179   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 6/60
	I0717 22:45:11.962590   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 7/60
	I0717 22:45:12.964036   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 8/60
	I0717 22:45:13.965451   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 9/60
	I0717 22:45:14.967940   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 10/60
	I0717 22:45:15.969507   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 11/60
	I0717 22:45:16.970927   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 12/60
	I0717 22:45:17.972533   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 13/60
	I0717 22:45:18.973840   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 14/60
	I0717 22:45:19.975808   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 15/60
	I0717 22:45:20.977090   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 16/60
	I0717 22:45:21.977956   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 17/60
	I0717 22:45:22.979681   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 18/60
	I0717 22:45:23.980562   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 19/60
	I0717 22:45:24.982395   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 20/60
	I0717 22:45:25.983252   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 21/60
	I0717 22:45:26.984613   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 22/60
	I0717 22:45:27.986172   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 23/60
	I0717 22:45:28.987573   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 24/60
	I0717 22:45:29.989404   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 25/60
	I0717 22:45:30.991510   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 26/60
	I0717 22:45:31.992731   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 27/60
	I0717 22:45:32.994410   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 28/60
	I0717 22:45:33.996047   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 29/60
	I0717 22:45:34.997886   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 30/60
	I0717 22:45:35.999723   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 31/60
	I0717 22:45:37.001041   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 32/60
	I0717 22:45:38.002244   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 33/60
	I0717 22:45:39.003435   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 34/60
	I0717 22:45:40.005551   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 35/60
	I0717 22:45:41.006908   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 36/60
	I0717 22:45:42.008333   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 37/60
	I0717 22:45:43.009678   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 38/60
	I0717 22:45:44.011140   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 39/60
	I0717 22:45:45.013492   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 40/60
	I0717 22:45:46.014821   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 41/60
	I0717 22:45:47.016277   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 42/60
	I0717 22:45:48.018331   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 43/60
	I0717 22:45:49.019829   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 44/60
	I0717 22:45:50.021696   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 45/60
	I0717 22:45:51.023867   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 46/60
	I0717 22:45:52.025230   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 47/60
	I0717 22:45:53.027284   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 48/60
	I0717 22:45:54.028943   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 49/60
	I0717 22:45:55.030629   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 50/60
	I0717 22:45:56.031883   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 51/60
	I0717 22:45:57.033166   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 52/60
	I0717 22:45:58.034479   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 53/60
	I0717 22:45:59.035814   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 54/60
	I0717 22:46:00.037881   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 55/60
	I0717 22:46:01.039385   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 56/60
	I0717 22:46:02.040821   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 57/60
	I0717 22:46:03.042711   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 58/60
	I0717 22:46:04.044468   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 59/60
	I0717 22:46:05.044968   53673 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0717 22:46:05.045006   53673 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 22:46:05.045022   53673 retry.go:31] will retry after 1.469981749s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 22:46:06.515657   53673 stop.go:39] StopHost: default-k8s-diff-port-504828
	I0717 22:46:06.516013   53673 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:46:06.516070   53673 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:46:06.530370   53673 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44959
	I0717 22:46:06.530836   53673 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:46:06.531241   53673 main.go:141] libmachine: Using API Version  1
	I0717 22:46:06.531266   53673 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:46:06.531581   53673 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:46:06.533594   53673 out.go:177] * Stopping node "default-k8s-diff-port-504828"  ...
	I0717 22:46:06.534972   53673 main.go:141] libmachine: Stopping "default-k8s-diff-port-504828"...
	I0717 22:46:06.534990   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetState
	I0717 22:46:06.536681   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .Stop
	I0717 22:46:06.539974   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 0/60
	I0717 22:46:07.541432   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 1/60
	I0717 22:46:08.542800   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 2/60
	I0717 22:46:09.544075   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 3/60
	I0717 22:46:10.545356   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 4/60
	I0717 22:46:11.547234   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 5/60
	I0717 22:46:12.548602   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 6/60
	I0717 22:46:13.550120   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 7/60
	I0717 22:46:14.551935   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 8/60
	I0717 22:46:15.553971   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 9/60
	I0717 22:46:16.556084   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 10/60
	I0717 22:46:17.557625   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 11/60
	I0717 22:46:18.558861   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 12/60
	I0717 22:46:19.560570   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 13/60
	I0717 22:46:20.562227   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 14/60
	I0717 22:46:21.564529   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 15/60
	I0717 22:46:22.566270   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 16/60
	I0717 22:46:23.567596   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 17/60
	I0717 22:46:24.569123   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 18/60
	I0717 22:46:25.570658   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 19/60
	I0717 22:46:26.572364   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 20/60
	I0717 22:46:27.573889   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 21/60
	I0717 22:46:28.575196   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 22/60
	I0717 22:46:29.576965   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 23/60
	I0717 22:46:30.578597   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 24/60
	I0717 22:46:31.580600   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 25/60
	I0717 22:46:32.581986   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 26/60
	I0717 22:46:33.583608   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 27/60
	I0717 22:46:34.585082   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 28/60
	I0717 22:46:35.586548   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 29/60
	I0717 22:46:36.588359   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 30/60
	I0717 22:46:37.589896   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 31/60
	I0717 22:46:38.591487   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 32/60
	I0717 22:46:39.593016   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 33/60
	I0717 22:46:40.594420   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 34/60
	I0717 22:46:41.596347   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 35/60
	I0717 22:46:42.597830   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 36/60
	I0717 22:46:43.599229   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 37/60
	I0717 22:46:44.600590   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 38/60
	I0717 22:46:45.602010   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 39/60
	I0717 22:46:46.603919   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 40/60
	I0717 22:46:47.605247   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 41/60
	I0717 22:46:48.606580   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 42/60
	I0717 22:46:49.608016   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 43/60
	I0717 22:46:50.609889   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 44/60
	I0717 22:46:51.611527   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 45/60
	I0717 22:46:52.613084   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 46/60
	I0717 22:46:53.614533   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 47/60
	I0717 22:46:54.616181   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 48/60
	I0717 22:46:55.617552   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 49/60
	I0717 22:46:56.619423   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 50/60
	I0717 22:46:57.620888   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 51/60
	I0717 22:46:58.622185   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 52/60
	I0717 22:46:59.623536   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 53/60
	I0717 22:47:00.624977   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 54/60
	I0717 22:47:01.626966   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 55/60
	I0717 22:47:02.628327   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 56/60
	I0717 22:47:03.629707   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 57/60
	I0717 22:47:04.631911   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 58/60
	I0717 22:47:05.633259   53673 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for machine to stop 59/60
	I0717 22:47:06.634363   53673 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0717 22:47:06.634405   53673 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 22:47:06.636410   53673 out.go:177] 
	W0717 22:47:06.637765   53673 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0717 22:47:06.637778   53673 out.go:239] * 
	* 
	W0717 22:47:06.640033   53673 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 22:47:06.641399   53673 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-504828 --alsologtostderr -v=3": exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-504828 -n default-k8s-diff-port-504828
E0717 22:47:11.147851   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/functional-767593/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-504828 -n default-k8s-diff-port-504828: exit status 3 (18.46193274s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 22:47:25.105880   54334 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.118:22: connect: no route to host
	E0717 22:47:25.105907   54334 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.118:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-504828" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (140.22s)
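Note: in this profile the stop is attempted twice; the first 60-second wait times out, retry.go schedules another pass ("will retry after 1.469981749s"), and the second wait times out as well. A small, hypothetical retry wrapper (not minikube's retry package) showing that two-attempt shape:

package retrysketch

import (
	"fmt"
	"time"
)

// retryAfter runs fn up to attempts times, sleeping for delay between
// failures, and returns the last error if every attempt fails. The log above
// corresponds to two attempts with a ~1.47s pause between them.
func retryAfter(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		if i < attempts-1 {
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
		}
	}
	return fmt.Errorf("after %d attempts: %w", attempts, err)
}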

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-332820 -n old-k8s-version-332820
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-332820 -n old-k8s-version-332820: exit status 3 (3.167921668s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 22:45:23.601833   53723 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.149:22: connect: no route to host
	E0717 22:45:23.601852   53723 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.149:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-332820 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-332820 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153244507s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.149:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-332820 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-332820 -n old-k8s-version-332820
E0717 22:45:31.747466   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-332820 -n old-k8s-version-332820: exit status 3 (3.062164885s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 22:45:32.817851   53800 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.149:22: connect: no route to host
	E0717 22:45:32.817871   53800 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.149:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-332820" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.38s)
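Note: the assertion at start_stop_delete_test.go:241 expects the post-stop host status to read "Stopped", but the guest is unreachable and reports "Error", so the follow-up "addons enable dashboard" immediately fails with MK_ADDON_ENABLE_PAUSED. A hedged sketch of the kind of guard a caller could add before enabling addons, polling a hypothetical status callback (for example a wrapper around "minikube status --format={{.Host}} -p <profile>") until it reports "Stopped" or a deadline passes:

package statuswait

import (
	"context"
	"fmt"
	"time"
)

// waitForStopped polls status every interval until it returns "Stopped" or
// ctx expires. status is a caller-supplied callback; this is not the helper
// the test itself uses.
func waitForStopped(ctx context.Context, interval time.Duration, status func() (string, error)) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		got, err := status()
		if err == nil && got == "Stopped" {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("host never reached \"Stopped\" (last status %q, err %v): %w", got, err, ctx.Err())
		case <-ticker.C:
		}
	}
}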

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-571296 -n embed-certs-571296
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-571296 -n embed-certs-571296: exit status 3 (3.16767346s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 22:46:43.729889   54137 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.179:22: connect: no route to host
	E0717 22:46:43.729914   54137 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.179:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-571296 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-571296 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153290778s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.179:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-571296 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-571296 -n embed-certs-571296
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-571296 -n embed-certs-571296: exit status 3 (3.063044098s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 22:46:52.945909   54208 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.179:22: connect: no route to host
	E0717 22:46:52.945933   54208 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.179:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-571296" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-935524 -n no-preload-935524
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-935524 -n no-preload-935524: exit status 3 (3.168120726s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 22:47:23.409865   54398 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host
	E0717 22:47:23.409885   54398 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-935524 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-935524 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153708982s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-935524 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-935524 -n no-preload-935524
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-935524 -n no-preload-935524: exit status 3 (3.061983757s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 22:47:32.625914   54532 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host
	E0717 22:47:32.625939   54532 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.6:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-935524" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
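Note: every post-mortem status call in these sections fails the same way: the dial to port 22 returns "connect: no route to host", so NewSession never gets a transport. An illustrative probe using only the Go standard library that surfaces the same class of error before any SSH handshake is attempted (the address is a placeholder, not part of the test suite):

package sshprobe

import (
	"fmt"
	"net"
	"time"
)

// probeSSH checks raw TCP reachability of an SSH endpoint. When the guest is
// half-stopped and its address has disappeared, DialTimeout returns an error
// such as "dial tcp 192.168.39.6:22: connect: no route to host", matching the
// status failures in the log.
func probeSSH(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return fmt.Errorf("ssh endpoint %s unreachable: %w", addr, err)
	}
	defer conn.Close()
	return nil
}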

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-504828 -n default-k8s-diff-port-504828
E0717 22:47:28.101703   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/functional-767593/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-504828 -n default-k8s-diff-port-504828: exit status 3 (3.167831952s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 22:47:28.273873   54473 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.118:22: connect: no route to host
	E0717 22:47:28.273899   54473 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.118:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-504828 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-504828 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153527138s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.118:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-504828 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-504828 -n default-k8s-diff-port-504828
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-504828 -n default-k8s-diff-port-504828: exit status 3 (3.062085955s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 22:47:37.489930   54609 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.118:22: connect: no route to host
	E0717 22:47:37.489949   54609 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.118:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-504828" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.55s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0717 22:57:28.101144   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/functional-767593/client.crt: no such file or directory
E0717 22:58:11.892614   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-935524 -n no-preload-935524
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-07-17 23:05:03.40355352 +0000 UTC m=+5067.193339520
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
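Note: the test budgets 9m0s for a pod labeled k8s-app=kubernetes-dashboard to become Running and gives up with "context deadline exceeded". A rough sketch of such a wait using client-go, with an assumed 2-second poll interval (this is not the helper the test actually uses):

package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForLabeledPod polls until at least one pod matching selector in ns is
// Running, or until ctx expires (a 9-minute context would reproduce the
// "context deadline exceeded" failure above).
func waitForLabeledPod(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pod %q in namespace %q did not start: %w", selector, ns, ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}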
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-935524 -n no-preload-935524
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-935524 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-935524 logs -n 25: (1.746365429s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-431736                                 | NoKubernetes-431736          | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:42 UTC |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p pause-482945                                        | pause-482945                 | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:42 UTC |
	| start   | -p embed-certs-571296                                  | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-366864                              | cert-expiration-366864       | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:42 UTC |
	| delete  | -p                                                     | disable-driver-mounts-615088 | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:42 UTC |
	|         | disable-driver-mounts-615088                           |                              |         |         |                     |                     |
	| start   | -p no-preload-935524                                   | no-preload-935524            | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| ssh     | -p NoKubernetes-431736 sudo                            | NoKubernetes-431736          | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-431736                                 | NoKubernetes-431736          | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:42 UTC |
	| start   | -p                                                     | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:44 UTC |
	|         | default-k8s-diff-port-504828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-332820        | old-k8s-version-332820       | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-332820                              | old-k8s-version-332820       | jenkins | v1.31.0 | 17 Jul 23 22:43 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-571296            | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 22:44 UTC | 17 Jul 23 22:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-571296                                  | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 22:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-935524             | no-preload-935524            | jenkins | v1.31.0 | 17 Jul 23 22:44 UTC | 17 Jul 23 22:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-935524                                   | no-preload-935524            | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-504828  | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC | 17 Jul 23 22:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC |                     |
	|         | default-k8s-diff-port-504828                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-332820             | old-k8s-version-332820       | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-332820                              | old-k8s-version-332820       | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC | 17 Jul 23 22:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-571296                 | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 22:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-571296                                  | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 22:46 UTC | 17 Jul 23 23:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-935524                  | no-preload-935524            | jenkins | v1.31.0 | 17 Jul 23 22:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-504828       | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-935524                                   | no-preload-935524            | jenkins | v1.31.0 | 17 Jul 23 22:47 UTC | 17 Jul 23 22:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:47 UTC | 17 Jul 23 23:01 UTC |
	|         | default-k8s-diff-port-504828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 22:47:37
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 22:47:37.527061   54649 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:47:37.527212   54649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:47:37.527221   54649 out.go:309] Setting ErrFile to fd 2...
	I0717 22:47:37.527228   54649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:47:37.527438   54649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-15759/.minikube/bin
	I0717 22:47:37.527980   54649 out.go:303] Setting JSON to false
	I0717 22:47:37.528901   54649 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9010,"bootTime":1689625048,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 22:47:37.528964   54649 start.go:138] virtualization: kvm guest
	I0717 22:47:37.531211   54649 out.go:177] * [default-k8s-diff-port-504828] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 22:47:37.533158   54649 notify.go:220] Checking for updates...
	I0717 22:47:37.533188   54649 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 22:47:37.535650   54649 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 22:47:37.537120   54649 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:47:37.538622   54649 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-15759/.minikube
	I0717 22:47:37.540087   54649 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 22:47:37.541460   54649 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 22:47:37.543023   54649 config.go:182] Loaded profile config "default-k8s-diff-port-504828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:47:37.543367   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:47:37.543410   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:47:37.557812   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41209
	I0717 22:47:37.558215   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:47:37.558854   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:47:37.558880   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:47:37.559209   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:47:37.559422   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:47:37.559654   54649 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 22:47:37.559930   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:47:37.559964   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:47:37.574919   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36243
	I0717 22:47:37.575395   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:47:37.575884   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:47:37.575907   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:47:37.576216   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:47:37.576373   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:47:37.609134   54649 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 22:47:37.610479   54649 start.go:298] selected driver: kvm2
	I0717 22:47:37.610497   54649 start.go:880] validating driver "kvm2" against &{Name:default-k8s-diff-port-504828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-504828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.118 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:47:37.610629   54649 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 22:47:37.611264   54649 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:37.611363   54649 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16899-15759/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 22:47:37.626733   54649 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.0
	I0717 22:47:37.627071   54649 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 22:47:37.627102   54649 cni.go:84] Creating CNI manager for ""
	I0717 22:47:37.627113   54649 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:47:37.627123   54649 start_flags.go:319] config:
	{Name:default-k8s-diff-port-504828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-504828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.118 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:47:37.627251   54649 iso.go:125] acquiring lock: {Name:mk0fc3164b0f4222cb2c3b274841202f1f6c0fc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:37.629965   54649 out.go:177] * Starting control plane node default-k8s-diff-port-504828 in cluster default-k8s-diff-port-504828
	I0717 22:47:32.766201   54573 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 22:47:32.766339   54573 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/config.json ...
	I0717 22:47:32.766467   54573 cache.go:107] acquiring lock: {Name:mk01bc74ef42cddd6cd05b75ec900cb2a05e15de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:32.766476   54573 cache.go:107] acquiring lock: {Name:mk672b2225edd60ecd8aa8e076d6e3579923204f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:32.766504   54573 cache.go:107] acquiring lock: {Name:mk1ec8b402c7d0685d25060e32c2f651eb2916fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:32.766539   54573 cache.go:107] acquiring lock: {Name:mkd18484b6a11488d3306ab3200047f68a7be660 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:32.766573   54573 start.go:365] acquiring machines lock for no-preload-935524: {Name:mk8a02d78ba6485e1a7d39c2e7feff944c6a9dbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 22:47:32.766576   54573 cache.go:107] acquiring lock: {Name:mkb3015efe537f010ace1f299991daca38e60845 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:32.766610   54573 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3 exists
	I0717 22:47:32.766586   54573 cache.go:107] acquiring lock: {Name:mkc8c0d0fa55ce47999adb3e73b20a24cafac7c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:32.766637   54573 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0 exists
	I0717 22:47:32.766653   54573 cache.go:96] cache image "registry.k8s.io/etcd:3.5.7-0" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0" took 100.155µs
	I0717 22:47:32.766659   54573 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I0717 22:47:32.766648   54573 cache.go:107] acquiring lock: {Name:mke2add190f322b938de65cf40269b08b3acfca3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:32.766656   54573 cache.go:107] acquiring lock: {Name:mk075beefd466e66915afc5543af4c3b175d5d80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:32.766681   54573 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 187.554µs
	I0717 22:47:32.766710   54573 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0717 22:47:32.766670   54573 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.7-0 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0 succeeded
	I0717 22:47:32.766735   54573 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1" took 88.679µs
	I0717 22:47:32.766748   54573 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0717 22:47:32.766629   54573 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3 exists
	I0717 22:47:32.766763   54573 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.27.3" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3" took 231.824µs
	I0717 22:47:32.766771   54573 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3 exists
	I0717 22:47:32.766717   54573 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I0717 22:47:32.766570   54573 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0717 22:47:32.766780   54573 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.27.3" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3" took 194.904µs
	I0717 22:47:32.766790   54573 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.27.3 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3 succeeded
	I0717 22:47:32.766787   54573 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 329.218µs
	I0717 22:47:32.766631   54573 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.27.3" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3" took 161.864µs
	I0717 22:47:32.766805   54573 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.27.3 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3 succeeded
	I0717 22:47:32.766774   54573 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.27.3 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3 succeeded
	I0717 22:47:32.766672   54573 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3 exists
	I0717 22:47:32.766820   54573 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.27.3" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3" took 238.693µs
	I0717 22:47:32.766828   54573 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.27.3 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3 succeeded
	I0717 22:47:32.766797   54573 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0717 22:47:32.766834   54573 cache.go:87] Successfully saved all images to host disk.
	I0717 22:47:37.631294   54649 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 22:47:37.631336   54649 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0717 22:47:37.631348   54649 cache.go:57] Caching tarball of preloaded images
	I0717 22:47:37.631442   54649 preload.go:174] Found /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 22:47:37.631456   54649 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 22:47:37.631555   54649 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/config.json ...
	I0717 22:47:37.631742   54649 start.go:365] acquiring machines lock for default-k8s-diff-port-504828: {Name:mk8a02d78ba6485e1a7d39c2e7feff944c6a9dbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 22:47:37.905723   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:47:40.977774   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:47:47.057804   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:47:50.129875   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:47:56.209815   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:47:59.281810   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:05.361786   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:08.433822   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:14.513834   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:17.585682   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:23.665811   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:26.737819   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:32.817800   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:35.889839   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:41.969818   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:45.041851   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:51.121816   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:54.193896   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:00.273812   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:03.345848   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:09.425796   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:12.497873   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:18.577847   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:21.649767   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:27.729823   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:30.801947   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:36.881840   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:39.953832   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:46.033825   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:49.105862   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:55.185814   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:58.257881   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:50:04.337852   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:50:07.409871   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:50:10.413979   54248 start.go:369] acquired machines lock for "embed-certs-571296" in 3m17.321305769s
	I0717 22:50:10.414028   54248 start.go:96] Skipping create...Using existing machine configuration
	I0717 22:50:10.414048   54248 fix.go:54] fixHost starting: 
	I0717 22:50:10.414400   54248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:50:10.414437   54248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:50:10.428711   54248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35793
	I0717 22:50:10.429132   54248 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:50:10.429628   54248 main.go:141] libmachine: Using API Version  1
	I0717 22:50:10.429671   54248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:50:10.430088   54248 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:50:10.430301   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:50:10.430491   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetState
	I0717 22:50:10.432357   54248 fix.go:102] recreateIfNeeded on embed-certs-571296: state=Stopped err=<nil>
	I0717 22:50:10.432375   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	W0717 22:50:10.432552   54248 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 22:50:10.434264   54248 out.go:177] * Restarting existing kvm2 VM for "embed-certs-571296" ...
	I0717 22:50:10.411622   53870 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:50:10.411707   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:50:10.413827   53870 machine.go:91] provisioned docker machine in 4m37.430605556s
	I0717 22:50:10.413860   53870 fix.go:56] fixHost completed within 4m37.451042302s
	I0717 22:50:10.413870   53870 start.go:83] releasing machines lock for "old-k8s-version-332820", held for 4m37.451061598s
	W0717 22:50:10.413907   53870 start.go:672] error starting host: provision: host is not running
	W0717 22:50:10.414004   53870 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0717 22:50:10.414014   53870 start.go:687] Will try again in 5 seconds ...
	I0717 22:50:10.435984   54248 main.go:141] libmachine: (embed-certs-571296) Calling .Start
	I0717 22:50:10.436181   54248 main.go:141] libmachine: (embed-certs-571296) Ensuring networks are active...
	I0717 22:50:10.436939   54248 main.go:141] libmachine: (embed-certs-571296) Ensuring network default is active
	I0717 22:50:10.437252   54248 main.go:141] libmachine: (embed-certs-571296) Ensuring network mk-embed-certs-571296 is active
	I0717 22:50:10.437751   54248 main.go:141] libmachine: (embed-certs-571296) Getting domain xml...
	I0717 22:50:10.438706   54248 main.go:141] libmachine: (embed-certs-571296) Creating domain...
	I0717 22:50:10.795037   54248 main.go:141] libmachine: (embed-certs-571296) Waiting to get IP...
	I0717 22:50:10.795808   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:10.796178   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:10.796237   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:10.796156   55063 retry.go:31] will retry after 189.390538ms: waiting for machine to come up
	I0717 22:50:10.987904   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:10.988435   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:10.988466   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:10.988382   55063 retry.go:31] will retry after 260.75291ms: waiting for machine to come up
	I0717 22:50:11.250849   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:11.251279   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:11.251323   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:11.251218   55063 retry.go:31] will retry after 421.317262ms: waiting for machine to come up
	I0717 22:50:11.673813   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:11.674239   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:11.674259   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:11.674206   55063 retry.go:31] will retry after 512.64366ms: waiting for machine to come up
	I0717 22:50:12.188810   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:12.189271   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:12.189298   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:12.189222   55063 retry.go:31] will retry after 489.02322ms: waiting for machine to come up
	I0717 22:50:12.679695   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:12.680108   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:12.680137   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:12.680012   55063 retry.go:31] will retry after 589.269905ms: waiting for machine to come up
	I0717 22:50:15.415915   53870 start.go:365] acquiring machines lock for old-k8s-version-332820: {Name:mk8a02d78ba6485e1a7d39c2e7feff944c6a9dbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 22:50:13.270668   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:13.271039   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:13.271069   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:13.270984   55063 retry.go:31] will retry after 722.873214ms: waiting for machine to come up
	I0717 22:50:13.996101   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:13.996681   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:13.996711   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:13.996623   55063 retry.go:31] will retry after 1.381840781s: waiting for machine to come up
	I0717 22:50:15.379777   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:15.380169   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:15.380197   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:15.380118   55063 retry.go:31] will retry after 1.335563851s: waiting for machine to come up
	I0717 22:50:16.718113   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:16.718637   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:16.718660   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:16.718575   55063 retry.go:31] will retry after 1.96500286s: waiting for machine to come up
	I0717 22:50:18.685570   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:18.686003   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:18.686023   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:18.685960   55063 retry.go:31] will retry after 2.007114073s: waiting for machine to come up
	I0717 22:50:20.694500   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:20.694961   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:20.694984   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:20.694916   55063 retry.go:31] will retry after 3.344996038s: waiting for machine to come up
	I0717 22:50:24.043423   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:24.043777   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:24.043799   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:24.043732   55063 retry.go:31] will retry after 3.031269711s: waiting for machine to come up
	I0717 22:50:27.077029   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:27.077447   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:27.077493   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:27.077379   55063 retry.go:31] will retry after 3.787872248s: waiting for machine to come up
	I0717 22:50:32.158403   54573 start.go:369] acquired machines lock for "no-preload-935524" in 2m59.391772757s
	I0717 22:50:32.158456   54573 start.go:96] Skipping create...Using existing machine configuration
	I0717 22:50:32.158478   54573 fix.go:54] fixHost starting: 
	I0717 22:50:32.158917   54573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:50:32.158960   54573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:50:32.177532   54573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41803
	I0717 22:50:32.177962   54573 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:50:32.178564   54573 main.go:141] libmachine: Using API Version  1
	I0717 22:50:32.178596   54573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:50:32.178981   54573 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:50:32.179197   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:50:32.179381   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetState
	I0717 22:50:32.181079   54573 fix.go:102] recreateIfNeeded on no-preload-935524: state=Stopped err=<nil>
	I0717 22:50:32.181104   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	W0717 22:50:32.181273   54573 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 22:50:32.183782   54573 out.go:177] * Restarting existing kvm2 VM for "no-preload-935524" ...
	I0717 22:50:32.185307   54573 main.go:141] libmachine: (no-preload-935524) Calling .Start
	I0717 22:50:32.185504   54573 main.go:141] libmachine: (no-preload-935524) Ensuring networks are active...
	I0717 22:50:32.186119   54573 main.go:141] libmachine: (no-preload-935524) Ensuring network default is active
	I0717 22:50:32.186543   54573 main.go:141] libmachine: (no-preload-935524) Ensuring network mk-no-preload-935524 is active
	I0717 22:50:32.186958   54573 main.go:141] libmachine: (no-preload-935524) Getting domain xml...
	I0717 22:50:32.187647   54573 main.go:141] libmachine: (no-preload-935524) Creating domain...
	I0717 22:50:32.567258   54573 main.go:141] libmachine: (no-preload-935524) Waiting to get IP...
	I0717 22:50:32.568423   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:32.568941   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:32.569021   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:32.568937   55160 retry.go:31] will retry after 239.368857ms: waiting for machine to come up
	I0717 22:50:30.866978   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:30.867476   54248 main.go:141] libmachine: (embed-certs-571296) Found IP for machine: 192.168.61.179
	I0717 22:50:30.867494   54248 main.go:141] libmachine: (embed-certs-571296) Reserving static IP address...
	I0717 22:50:30.867507   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has current primary IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:30.867958   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "embed-certs-571296", mac: "52:54:00:e0:4c:e5", ip: "192.168.61.179"} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:30.867994   54248 main.go:141] libmachine: (embed-certs-571296) Reserved static IP address: 192.168.61.179
	I0717 22:50:30.868012   54248 main.go:141] libmachine: (embed-certs-571296) DBG | skip adding static IP to network mk-embed-certs-571296 - found existing host DHCP lease matching {name: "embed-certs-571296", mac: "52:54:00:e0:4c:e5", ip: "192.168.61.179"}
	I0717 22:50:30.868034   54248 main.go:141] libmachine: (embed-certs-571296) DBG | Getting to WaitForSSH function...
	I0717 22:50:30.868052   54248 main.go:141] libmachine: (embed-certs-571296) Waiting for SSH to be available...
	I0717 22:50:30.870054   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:30.870366   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:30.870402   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:30.870514   54248 main.go:141] libmachine: (embed-certs-571296) DBG | Using SSH client type: external
	I0717 22:50:30.870545   54248 main.go:141] libmachine: (embed-certs-571296) DBG | Using SSH private key: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa (-rw-------)
	I0717 22:50:30.870596   54248 main.go:141] libmachine: (embed-certs-571296) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.179 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 22:50:30.870623   54248 main.go:141] libmachine: (embed-certs-571296) DBG | About to run SSH command:
	I0717 22:50:30.870637   54248 main.go:141] libmachine: (embed-certs-571296) DBG | exit 0
	I0717 22:50:30.965028   54248 main.go:141] libmachine: (embed-certs-571296) DBG | SSH cmd err, output: <nil>: 
	I0717 22:50:30.965413   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetConfigRaw
	I0717 22:50:30.966103   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetIP
	I0717 22:50:30.968689   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:30.969031   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:30.969068   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:30.969282   54248 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296/config.json ...
	I0717 22:50:30.969474   54248 machine.go:88] provisioning docker machine ...
	I0717 22:50:30.969491   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:50:30.969725   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetMachineName
	I0717 22:50:30.969910   54248 buildroot.go:166] provisioning hostname "embed-certs-571296"
	I0717 22:50:30.969928   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetMachineName
	I0717 22:50:30.970057   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:50:30.972055   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:30.972390   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:30.972416   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:30.972590   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:50:30.972732   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:30.972851   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:30.973006   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:50:30.973150   54248 main.go:141] libmachine: Using SSH client type: native
	I0717 22:50:30.973572   54248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.179 22 <nil> <nil>}
	I0717 22:50:30.973586   54248 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-571296 && echo "embed-certs-571296" | sudo tee /etc/hostname
	I0717 22:50:31.119085   54248 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-571296
	
	I0717 22:50:31.119112   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:50:31.121962   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.122254   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:31.122287   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.122439   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:50:31.122634   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:31.122824   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:31.122969   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:50:31.123140   54248 main.go:141] libmachine: Using SSH client type: native
	I0717 22:50:31.123581   54248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.179 22 <nil> <nil>}
	I0717 22:50:31.123607   54248 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-571296' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-571296/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-571296' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 22:50:31.262347   54248 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:50:31.262373   54248 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16899-15759/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-15759/.minikube}
	I0717 22:50:31.262422   54248 buildroot.go:174] setting up certificates
	I0717 22:50:31.262431   54248 provision.go:83] configureAuth start
	I0717 22:50:31.262443   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetMachineName
	I0717 22:50:31.262717   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetIP
	I0717 22:50:31.265157   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.265555   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:31.265582   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.265716   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:50:31.267966   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.268299   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:31.268334   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.268482   54248 provision.go:138] copyHostCerts
	I0717 22:50:31.268529   54248 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem, removing ...
	I0717 22:50:31.268538   54248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem
	I0717 22:50:31.268602   54248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem (1078 bytes)
	I0717 22:50:31.268686   54248 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem, removing ...
	I0717 22:50:31.268698   54248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem
	I0717 22:50:31.268720   54248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem (1123 bytes)
	I0717 22:50:31.268769   54248 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem, removing ...
	I0717 22:50:31.268776   54248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem
	I0717 22:50:31.268794   54248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem (1675 bytes)
	I0717 22:50:31.268837   54248 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem org=jenkins.embed-certs-571296 san=[192.168.61.179 192.168.61.179 localhost 127.0.0.1 minikube embed-certs-571296]
	I0717 22:50:31.374737   54248 provision.go:172] copyRemoteCerts
	I0717 22:50:31.374796   54248 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 22:50:31.374818   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:50:31.377344   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.377664   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:31.377700   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.377873   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:50:31.378063   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:31.378223   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:50:31.378364   54248 sshutil.go:53] new ssh client: &{IP:192.168.61.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa Username:docker}
	I0717 22:50:31.474176   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 22:50:31.498974   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0717 22:50:31.522794   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 22:50:31.546276   54248 provision.go:86] duration metric: configureAuth took 283.830107ms
	I0717 22:50:31.546313   54248 buildroot.go:189] setting minikube options for container-runtime
	I0717 22:50:31.546521   54248 config.go:182] Loaded profile config "embed-certs-571296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:50:31.546603   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:50:31.549119   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.549485   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:31.549544   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.549716   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:50:31.549898   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:31.550056   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:31.550206   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:50:31.550376   54248 main.go:141] libmachine: Using SSH client type: native
	I0717 22:50:31.550819   54248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.179 22 <nil> <nil>}
	I0717 22:50:31.550837   54248 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 22:50:31.884933   54248 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 22:50:31.884960   54248 machine.go:91] provisioned docker machine in 915.473611ms
	I0717 22:50:31.884973   54248 start.go:300] post-start starting for "embed-certs-571296" (driver="kvm2")
	I0717 22:50:31.884985   54248 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 22:50:31.885011   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:50:31.885399   54248 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 22:50:31.885444   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:50:31.887965   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.888302   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:31.888338   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.888504   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:50:31.888710   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:31.888862   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:50:31.888988   54248 sshutil.go:53] new ssh client: &{IP:192.168.61.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa Username:docker}
	I0717 22:50:31.983951   54248 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 22:50:31.988220   54248 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 22:50:31.988248   54248 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/addons for local assets ...
	I0717 22:50:31.988334   54248 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/files for local assets ...
	I0717 22:50:31.988429   54248 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> 229902.pem in /etc/ssl/certs
	I0717 22:50:31.988543   54248 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 22:50:31.997933   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:50:32.020327   54248 start.go:303] post-start completed in 135.337882ms
	I0717 22:50:32.020353   54248 fix.go:56] fixHost completed within 21.60630369s
	I0717 22:50:32.020377   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:50:32.023026   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:32.023382   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:32.023415   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:32.023665   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:50:32.023873   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:32.024047   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:32.024193   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:50:32.024348   54248 main.go:141] libmachine: Using SSH client type: native
	I0717 22:50:32.024722   54248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.179 22 <nil> <nil>}
	I0717 22:50:32.024734   54248 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 22:50:32.158218   54248 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689634232.105028258
	
	I0717 22:50:32.158252   54248 fix.go:206] guest clock: 1689634232.105028258
	I0717 22:50:32.158262   54248 fix.go:219] Guest: 2023-07-17 22:50:32.105028258 +0000 UTC Remote: 2023-07-17 22:50:32.020356843 +0000 UTC m=+219.067919578 (delta=84.671415ms)
	I0717 22:50:32.158286   54248 fix.go:190] guest clock delta is within tolerance: 84.671415ms
	I0717 22:50:32.158292   54248 start.go:83] releasing machines lock for "embed-certs-571296", held for 21.74428315s
	I0717 22:50:32.158327   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:50:32.158592   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetIP
	I0717 22:50:32.161034   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:32.161385   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:32.161418   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:32.161609   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:50:32.162089   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:50:32.162247   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:50:32.162322   54248 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 22:50:32.162368   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:50:32.162453   54248 ssh_runner.go:195] Run: cat /version.json
	I0717 22:50:32.162474   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:50:32.165101   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:32.165235   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:32.165564   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:32.165591   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:32.165615   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:32.165637   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:32.165688   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:50:32.165806   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:50:32.165877   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:32.165995   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:32.166172   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:50:32.166181   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:50:32.166307   54248 sshutil.go:53] new ssh client: &{IP:192.168.61.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa Username:docker}
	I0717 22:50:32.166363   54248 sshutil.go:53] new ssh client: &{IP:192.168.61.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa Username:docker}
	I0717 22:50:32.285102   54248 ssh_runner.go:195] Run: systemctl --version
	I0717 22:50:32.291185   54248 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 22:50:32.437104   54248 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 22:50:32.443217   54248 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 22:50:32.443291   54248 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:50:32.461161   54248 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 22:50:32.461181   54248 start.go:466] detecting cgroup driver to use...
	I0717 22:50:32.461237   54248 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 22:50:32.483011   54248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 22:50:32.497725   54248 docker.go:196] disabling cri-docker service (if available) ...
	I0717 22:50:32.497788   54248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 22:50:32.512008   54248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 22:50:32.532595   54248 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 22:50:32.654303   54248 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 22:50:32.783140   54248 docker.go:212] disabling docker service ...
	I0717 22:50:32.783209   54248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 22:50:32.795822   54248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 22:50:32.809540   54248 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 22:50:32.923229   54248 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 22:50:33.025589   54248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 22:50:33.039420   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 22:50:33.056769   54248 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 22:50:33.056831   54248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:50:33.066205   54248 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 22:50:33.066277   54248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:50:33.075559   54248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:50:33.084911   54248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:50:33.094270   54248 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 22:50:33.103819   54248 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 22:50:33.112005   54248 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 22:50:33.112070   54248 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 22:50:33.125459   54248 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 22:50:33.134481   54248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 22:50:33.240740   54248 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 22:50:33.418504   54248 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 22:50:33.418576   54248 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 22:50:33.424143   54248 start.go:534] Will wait 60s for crictl version
	I0717 22:50:33.424202   54248 ssh_runner.go:195] Run: which crictl
	I0717 22:50:33.428330   54248 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 22:50:33.465318   54248 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 22:50:33.465403   54248 ssh_runner.go:195] Run: crio --version
	I0717 22:50:33.516467   54248 ssh_runner.go:195] Run: crio --version
	I0717 22:50:33.569398   54248 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 22:50:32.810512   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:32.811060   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:32.811095   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:32.810988   55160 retry.go:31] will retry after 309.941434ms: waiting for machine to come up
	I0717 22:50:33.122633   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:33.123092   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:33.123138   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:33.123046   55160 retry.go:31] will retry after 487.561142ms: waiting for machine to come up
	I0717 22:50:33.611932   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:33.612512   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:33.612542   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:33.612485   55160 retry.go:31] will retry after 367.897327ms: waiting for machine to come up
	I0717 22:50:33.981820   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:33.982279   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:33.982326   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:33.982214   55160 retry.go:31] will retry after 630.28168ms: waiting for machine to come up
	I0717 22:50:34.614129   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:34.614625   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:34.614665   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:34.614569   55160 retry.go:31] will retry after 677.033607ms: waiting for machine to come up
	I0717 22:50:35.292873   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:35.293409   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:35.293443   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:35.293360   55160 retry.go:31] will retry after 1.011969157s: waiting for machine to come up
	I0717 22:50:36.306452   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:36.306895   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:36.306924   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:36.306836   55160 retry.go:31] will retry after 1.035213701s: waiting for machine to come up
	I0717 22:50:37.343727   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:37.344195   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:37.344227   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:37.344143   55160 retry.go:31] will retry after 1.820372185s: waiting for machine to come up
	I0717 22:50:33.571037   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetIP
	I0717 22:50:33.574233   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:33.574758   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:33.574796   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:33.575014   54248 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0717 22:50:33.579342   54248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:50:33.591600   54248 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 22:50:33.591678   54248 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:50:33.625951   54248 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 22:50:33.626026   54248 ssh_runner.go:195] Run: which lz4
	I0717 22:50:33.630581   54248 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 22:50:33.635135   54248 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 22:50:33.635171   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0717 22:50:35.389650   54248 crio.go:444] Took 1.759110 seconds to copy over tarball
	I0717 22:50:35.389728   54248 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 22:50:39.166682   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:39.167111   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:39.167146   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:39.167068   55160 retry.go:31] will retry after 1.739687633s: waiting for machine to come up
	I0717 22:50:40.909258   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:40.909752   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:40.909784   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:40.909694   55160 retry.go:31] will retry after 2.476966629s: waiting for machine to come up
	I0717 22:50:38.336151   54248 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.946397065s)
	I0717 22:50:38.336176   54248 crio.go:451] Took 2.946502 seconds to extract the tarball
	I0717 22:50:38.336184   54248 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 22:50:38.375618   54248 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:50:38.425357   54248 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 22:50:38.425377   54248 cache_images.go:84] Images are preloaded, skipping loading
	I0717 22:50:38.425449   54248 ssh_runner.go:195] Run: crio config
	I0717 22:50:38.511015   54248 cni.go:84] Creating CNI manager for ""
	I0717 22:50:38.511040   54248 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:50:38.511050   54248 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 22:50:38.511067   54248 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.179 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-571296 NodeName:embed-certs-571296 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.179"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.179 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 22:50:38.511213   54248 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.179
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-571296"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.179
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.179"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 22:50:38.511287   54248 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-571296 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.179
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:embed-certs-571296 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 22:50:38.511340   54248 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 22:50:38.522373   54248 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 22:50:38.522432   54248 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 22:50:38.532894   54248 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0717 22:50:38.550814   54248 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 22:50:38.567038   54248 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0717 22:50:38.583844   54248 ssh_runner.go:195] Run: grep 192.168.61.179	control-plane.minikube.internal$ /etc/hosts
	I0717 22:50:38.587687   54248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.179	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:50:38.600458   54248 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296 for IP: 192.168.61.179
	I0717 22:50:38.600490   54248 certs.go:190] acquiring lock for shared ca certs: {Name:mk358cdd8ffcf2f8ada4337c2bb687d932ef5afe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:50:38.600617   54248 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key
	I0717 22:50:38.600659   54248 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key
	I0717 22:50:38.600721   54248 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296/client.key
	I0717 22:50:38.600774   54248 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296/apiserver.key.1b57fe25
	I0717 22:50:38.600820   54248 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296/proxy-client.key
	I0717 22:50:38.600929   54248 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem (1338 bytes)
	W0717 22:50:38.600955   54248 certs.go:433] ignoring /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990_empty.pem, impossibly tiny 0 bytes
	I0717 22:50:38.600966   54248 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 22:50:38.600986   54248 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem (1078 bytes)
	I0717 22:50:38.601017   54248 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem (1123 bytes)
	I0717 22:50:38.601050   54248 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem (1675 bytes)
	I0717 22:50:38.601093   54248 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:50:38.601734   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 22:50:38.627490   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 22:50:38.654423   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 22:50:38.682997   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 22:50:38.712432   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 22:50:38.742901   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 22:50:38.768966   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 22:50:38.794778   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 22:50:38.819537   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 22:50:38.846730   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem --> /usr/share/ca-certificates/22990.pem (1338 bytes)
	I0717 22:50:38.870806   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /usr/share/ca-certificates/229902.pem (1708 bytes)
	I0717 22:50:38.894883   54248 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 22:50:38.911642   54248 ssh_runner.go:195] Run: openssl version
	I0717 22:50:38.917551   54248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 22:50:38.928075   54248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:50:38.932832   54248 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:50:38.932888   54248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:50:38.938574   54248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 22:50:38.948446   54248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22990.pem && ln -fs /usr/share/ca-certificates/22990.pem /etc/ssl/certs/22990.pem"
	I0717 22:50:38.958543   54248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22990.pem
	I0717 22:50:38.963637   54248 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 21:49 /usr/share/ca-certificates/22990.pem
	I0717 22:50:38.963687   54248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22990.pem
	I0717 22:50:38.969460   54248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22990.pem /etc/ssl/certs/51391683.0"
	I0717 22:50:38.979718   54248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/229902.pem && ln -fs /usr/share/ca-certificates/229902.pem /etc/ssl/certs/229902.pem"
	I0717 22:50:38.989796   54248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/229902.pem
	I0717 22:50:38.994721   54248 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 21:49 /usr/share/ca-certificates/229902.pem
	I0717 22:50:38.994779   54248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/229902.pem
	I0717 22:50:39.000394   54248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/229902.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 22:50:39.011176   54248 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 22:50:39.016792   54248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 22:50:39.022959   54248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 22:50:39.029052   54248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 22:50:39.035096   54248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 22:50:39.040890   54248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 22:50:39.047007   54248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 22:50:39.053316   54248 kubeadm.go:404] StartCluster: {Name:embed-certs-571296 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-571296 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.179 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:50:39.053429   54248 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 22:50:39.053479   54248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:50:39.082896   54248 cri.go:89] found id: ""
	I0717 22:50:39.082981   54248 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 22:50:39.092999   54248 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 22:50:39.093021   54248 kubeadm.go:636] restartCluster start
	I0717 22:50:39.093076   54248 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 22:50:39.102254   54248 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:39.103361   54248 kubeconfig.go:92] found "embed-certs-571296" server: "https://192.168.61.179:8443"
	I0717 22:50:39.105846   54248 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 22:50:39.114751   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:39.114825   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:39.125574   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:39.626315   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:39.626406   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:39.637943   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:40.126535   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:40.126643   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:40.139075   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:40.626167   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:40.626306   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:40.638180   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:41.125818   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:41.125919   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:41.137569   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:41.625798   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:41.625900   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:41.637416   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:42.125972   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:42.126076   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:42.137316   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:42.625866   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:42.625964   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:42.637524   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:43.388908   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:43.389400   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:43.389434   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:43.389373   55160 retry.go:31] will retry after 2.639442454s: waiting for machine to come up
	I0717 22:50:46.032050   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:46.032476   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:46.032510   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:46.032419   55160 retry.go:31] will retry after 2.750548097s: waiting for machine to come up
	I0717 22:50:43.126317   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:43.126425   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:43.137978   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:43.626637   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:43.626751   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:43.638260   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:44.125834   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:44.125922   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:44.136925   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:44.626547   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:44.626647   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:44.638426   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:45.125978   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:45.126061   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:45.137496   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:45.626448   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:45.626511   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:45.638236   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:46.125776   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:46.125849   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:46.137916   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:46.626561   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:46.626674   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:46.638555   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:47.126090   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:47.126210   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:47.138092   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:47.626721   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:47.626802   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:47.637828   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:48.785507   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:48.785955   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:48.785987   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:48.785912   55160 retry.go:31] will retry after 4.05132206s: waiting for machine to come up
	I0717 22:50:48.126359   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:48.126438   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:48.137826   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:48.626413   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:48.626507   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:48.638354   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:49.114916   54248 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 22:50:49.114971   54248 kubeadm.go:1128] stopping kube-system containers ...
	I0717 22:50:49.114981   54248 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 22:50:49.115054   54248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:50:49.149465   54248 cri.go:89] found id: ""
	I0717 22:50:49.149558   54248 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 22:50:49.165197   54248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 22:50:49.174386   54248 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:50:49.174452   54248 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 22:50:49.183137   54248 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 22:50:49.183162   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:50:49.294495   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:50:50.169663   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:50:50.373276   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:50:50.485690   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:50:50.551312   54248 api_server.go:52] waiting for apiserver process to appear ...
	I0717 22:50:50.551389   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:50:51.066760   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:50:51.566423   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:50:52.066949   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:50:52.566304   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:50:54.227701   54649 start.go:369] acquired machines lock for "default-k8s-diff-port-504828" in 3m16.595911739s
	I0717 22:50:54.227764   54649 start.go:96] Skipping create...Using existing machine configuration
	I0717 22:50:54.227786   54649 fix.go:54] fixHost starting: 
	I0717 22:50:54.228206   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:50:54.228246   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:50:54.245721   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
	I0717 22:50:54.246143   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:50:54.246746   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:50:54.246783   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:50:54.247139   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:50:54.247353   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:50:54.247512   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetState
	I0717 22:50:54.249590   54649 fix.go:102] recreateIfNeeded on default-k8s-diff-port-504828: state=Stopped err=<nil>
	I0717 22:50:54.249630   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	W0717 22:50:54.249835   54649 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 22:50:54.251932   54649 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-504828" ...
	I0717 22:50:52.838478   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:52.839101   54573 main.go:141] libmachine: (no-preload-935524) Found IP for machine: 192.168.39.6
	I0717 22:50:52.839120   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has current primary IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:52.839129   54573 main.go:141] libmachine: (no-preload-935524) Reserving static IP address...
	I0717 22:50:52.839689   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "no-preload-935524", mac: "52:54:00:dc:7e:aa", ip: "192.168.39.6"} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:52.839724   54573 main.go:141] libmachine: (no-preload-935524) DBG | skip adding static IP to network mk-no-preload-935524 - found existing host DHCP lease matching {name: "no-preload-935524", mac: "52:54:00:dc:7e:aa", ip: "192.168.39.6"}
	I0717 22:50:52.839737   54573 main.go:141] libmachine: (no-preload-935524) Reserved static IP address: 192.168.39.6
	I0717 22:50:52.839752   54573 main.go:141] libmachine: (no-preload-935524) Waiting for SSH to be available...
	I0717 22:50:52.839769   54573 main.go:141] libmachine: (no-preload-935524) DBG | Getting to WaitForSSH function...
	I0717 22:50:52.842402   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:52.842739   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:52.842773   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:52.842861   54573 main.go:141] libmachine: (no-preload-935524) DBG | Using SSH client type: external
	I0717 22:50:52.842889   54573 main.go:141] libmachine: (no-preload-935524) DBG | Using SSH private key: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/no-preload-935524/id_rsa (-rw-------)
	I0717 22:50:52.842929   54573 main.go:141] libmachine: (no-preload-935524) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.6 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16899-15759/.minikube/machines/no-preload-935524/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 22:50:52.842947   54573 main.go:141] libmachine: (no-preload-935524) DBG | About to run SSH command:
	I0717 22:50:52.842962   54573 main.go:141] libmachine: (no-preload-935524) DBG | exit 0
	I0717 22:50:52.942283   54573 main.go:141] libmachine: (no-preload-935524) DBG | SSH cmd err, output: <nil>: 
	I0717 22:50:52.942665   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetConfigRaw
	I0717 22:50:52.943403   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetIP
	I0717 22:50:52.946152   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:52.946546   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:52.946587   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:52.946823   54573 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/config.json ...
	I0717 22:50:52.947043   54573 machine.go:88] provisioning docker machine ...
	I0717 22:50:52.947062   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:50:52.947259   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetMachineName
	I0717 22:50:52.947411   54573 buildroot.go:166] provisioning hostname "no-preload-935524"
	I0717 22:50:52.947431   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetMachineName
	I0717 22:50:52.947556   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:50:52.950010   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:52.950364   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:52.950394   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:52.950539   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:50:52.950709   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:52.950849   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:52.950980   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:50:52.951165   54573 main.go:141] libmachine: Using SSH client type: native
	I0717 22:50:52.951809   54573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0717 22:50:52.951831   54573 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-935524 && echo "no-preload-935524" | sudo tee /etc/hostname
	I0717 22:50:53.102629   54573 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-935524
	
	I0717 22:50:53.102665   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:50:53.105306   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.105689   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:53.105724   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.105856   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:50:53.106048   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:53.106219   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:53.106362   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:50:53.106504   54573 main.go:141] libmachine: Using SSH client type: native
	I0717 22:50:53.106886   54573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0717 22:50:53.106904   54573 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-935524' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-935524/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-935524' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 22:50:53.250601   54573 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:50:53.250631   54573 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16899-15759/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-15759/.minikube}
	I0717 22:50:53.250711   54573 buildroot.go:174] setting up certificates
	I0717 22:50:53.250721   54573 provision.go:83] configureAuth start
	I0717 22:50:53.250735   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetMachineName
	I0717 22:50:53.251063   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetIP
	I0717 22:50:53.253864   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.254309   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:53.254344   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.254513   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:50:53.256938   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.257385   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:53.257429   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.257534   54573 provision.go:138] copyHostCerts
	I0717 22:50:53.257595   54573 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem, removing ...
	I0717 22:50:53.257607   54573 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem
	I0717 22:50:53.257682   54573 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem (1078 bytes)
	I0717 22:50:53.257804   54573 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem, removing ...
	I0717 22:50:53.257816   54573 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem
	I0717 22:50:53.257843   54573 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem (1123 bytes)
	I0717 22:50:53.257929   54573 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem, removing ...
	I0717 22:50:53.257938   54573 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem
	I0717 22:50:53.257964   54573 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem (1675 bytes)
	I0717 22:50:53.258060   54573 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem org=jenkins.no-preload-935524 san=[192.168.39.6 192.168.39.6 localhost 127.0.0.1 minikube no-preload-935524]
	I0717 22:50:53.392234   54573 provision.go:172] copyRemoteCerts
	I0717 22:50:53.392307   54573 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 22:50:53.392335   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:50:53.395139   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.395529   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:53.395560   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.395734   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:50:53.395932   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:53.396109   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:50:53.396268   54573 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/no-preload-935524/id_rsa Username:docker}
	I0717 22:50:53.495214   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 22:50:53.523550   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0717 22:50:53.552276   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 22:50:53.576026   54573 provision.go:86] duration metric: configureAuth took 325.291158ms
	I0717 22:50:53.576057   54573 buildroot.go:189] setting minikube options for container-runtime
	I0717 22:50:53.576313   54573 config.go:182] Loaded profile config "no-preload-935524": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:50:53.576414   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:50:53.578969   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.579363   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:53.579404   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.579585   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:50:53.579783   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:53.579943   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:53.580113   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:50:53.580302   54573 main.go:141] libmachine: Using SSH client type: native
	I0717 22:50:53.580952   54573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0717 22:50:53.580979   54573 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 22:50:53.948696   54573 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 22:50:53.948725   54573 machine.go:91] provisioned docker machine in 1.001666705s
	I0717 22:50:53.948737   54573 start.go:300] post-start starting for "no-preload-935524" (driver="kvm2")
	I0717 22:50:53.948756   54573 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 22:50:53.948788   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:50:53.949144   54573 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 22:50:53.949179   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:50:53.951786   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.952221   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:53.952255   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.952468   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:50:53.952642   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:53.952863   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:50:53.953001   54573 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/no-preload-935524/id_rsa Username:docker}
	I0717 22:50:54.054995   54573 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 22:50:54.060431   54573 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 22:50:54.060455   54573 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/addons for local assets ...
	I0717 22:50:54.060524   54573 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/files for local assets ...
	I0717 22:50:54.060624   54573 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> 229902.pem in /etc/ssl/certs
	I0717 22:50:54.060737   54573 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 22:50:54.072249   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:50:54.094894   54573 start.go:303] post-start completed in 146.143243ms
	I0717 22:50:54.094919   54573 fix.go:56] fixHost completed within 21.936441056s
	I0717 22:50:54.094937   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:50:54.097560   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:54.097893   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:54.097926   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:54.098153   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:50:54.098377   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:54.098561   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:54.098729   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:50:54.098899   54573 main.go:141] libmachine: Using SSH client type: native
	I0717 22:50:54.099308   54573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0717 22:50:54.099323   54573 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 22:50:54.227537   54573 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689634254.168158155
	
	I0717 22:50:54.227562   54573 fix.go:206] guest clock: 1689634254.168158155
	I0717 22:50:54.227573   54573 fix.go:219] Guest: 2023-07-17 22:50:54.168158155 +0000 UTC Remote: 2023-07-17 22:50:54.094922973 +0000 UTC m=+201.463147612 (delta=73.235182ms)
	I0717 22:50:54.227598   54573 fix.go:190] guest clock delta is within tolerance: 73.235182ms
	I0717 22:50:54.227604   54573 start.go:83] releasing machines lock for "no-preload-935524", held for 22.06917115s
	I0717 22:50:54.227636   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:50:54.227891   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetIP
	I0717 22:50:54.230831   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:54.231223   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:54.231262   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:54.231367   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:50:54.231932   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:50:54.232109   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:50:54.232181   54573 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 22:50:54.232226   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:50:54.232322   54573 ssh_runner.go:195] Run: cat /version.json
	I0717 22:50:54.232354   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:50:54.235001   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:54.235351   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:54.235429   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:54.235463   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:54.235600   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:50:54.235791   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:54.235825   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:54.235857   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:54.235969   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:50:54.236027   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:50:54.236119   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:54.236253   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:50:54.236254   54573 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/no-preload-935524/id_rsa Username:docker}
	I0717 22:50:54.236392   54573 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/no-preload-935524/id_rsa Username:docker}
	I0717 22:50:54.360160   54573 ssh_runner.go:195] Run: systemctl --version
	I0717 22:50:54.367093   54573 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 22:50:54.523956   54573 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 22:50:54.531005   54573 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 22:50:54.531121   54573 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:50:54.548669   54573 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 22:50:54.548697   54573 start.go:466] detecting cgroup driver to use...
	I0717 22:50:54.548768   54573 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 22:50:54.564722   54573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 22:50:54.577237   54573 docker.go:196] disabling cri-docker service (if available) ...
	I0717 22:50:54.577303   54573 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 22:50:54.590625   54573 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 22:50:54.603897   54573 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 22:50:54.731958   54573 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 22:50:54.862565   54573 docker.go:212] disabling docker service ...
	I0717 22:50:54.862632   54573 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 22:50:54.875946   54573 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 22:50:54.888617   54573 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 22:50:54.997410   54573 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 22:50:55.110094   54573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 22:50:55.123729   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 22:50:55.144670   54573 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 22:50:55.144754   54573 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:50:55.154131   54573 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 22:50:55.154193   54573 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:50:55.164669   54573 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:50:55.177189   54573 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:50:55.189292   54573 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 22:50:55.204022   54573 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 22:50:55.212942   54573 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 22:50:55.213006   54573 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 22:50:55.232951   54573 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 22:50:55.246347   54573 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 22:50:55.366491   54573 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 22:50:55.544250   54573 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 22:50:55.544336   54573 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 22:50:55.550952   54573 start.go:534] Will wait 60s for crictl version
	I0717 22:50:55.551021   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:50:55.558527   54573 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 22:50:55.602591   54573 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 22:50:55.602687   54573 ssh_runner.go:195] Run: crio --version
	I0717 22:50:55.663719   54573 ssh_runner.go:195] Run: crio --version
	I0717 22:50:55.726644   54573 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 22:50:54.253440   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .Start
	I0717 22:50:54.253678   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Ensuring networks are active...
	I0717 22:50:54.254444   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Ensuring network default is active
	I0717 22:50:54.254861   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Ensuring network mk-default-k8s-diff-port-504828 is active
	I0717 22:50:54.255337   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Getting domain xml...
	I0717 22:50:54.256194   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Creating domain...
	I0717 22:50:54.643844   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting to get IP...
	I0717 22:50:54.644894   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:50:54.645362   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:50:54.645465   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:50:54.645359   55310 retry.go:31] will retry after 296.655364ms: waiting for machine to come up
	I0717 22:50:54.943927   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:50:54.944465   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:50:54.944500   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:50:54.944408   55310 retry.go:31] will retry after 351.801959ms: waiting for machine to come up
	I0717 22:50:55.298164   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:50:55.298678   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:50:55.298710   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:50:55.298642   55310 retry.go:31] will retry after 354.726659ms: waiting for machine to come up
	I0717 22:50:55.655122   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:50:55.655582   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:50:55.655710   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:50:55.655633   55310 retry.go:31] will retry after 540.353024ms: waiting for machine to come up
	I0717 22:50:56.197370   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:50:56.197929   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:50:56.197963   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:50:56.197897   55310 retry.go:31] will retry after 602.667606ms: waiting for machine to come up
	I0717 22:50:56.802746   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:50:56.803401   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:50:56.803431   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:50:56.803344   55310 retry.go:31] will retry after 675.557445ms: waiting for machine to come up
	I0717 22:50:57.480002   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:50:57.480476   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:50:57.480508   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:50:57.480423   55310 retry.go:31] will retry after 898.307594ms: waiting for machine to come up
	I0717 22:50:55.728247   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetIP
	I0717 22:50:55.731423   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:55.731871   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:55.731910   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:55.732109   54573 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 22:50:55.736921   54573 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:50:55.751844   54573 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 22:50:55.751895   54573 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:50:55.787286   54573 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 22:50:55.787316   54573 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.27.3 registry.k8s.io/kube-controller-manager:v1.27.3 registry.k8s.io/kube-scheduler:v1.27.3 registry.k8s.io/kube-proxy:v1.27.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.7-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 22:50:55.787387   54573 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:50:55.787398   54573 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.27.3
	I0717 22:50:55.787418   54573 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 22:50:55.787450   54573 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.27.3
	I0717 22:50:55.787589   54573 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.27.3
	I0717 22:50:55.787602   54573 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0717 22:50:55.787630   54573 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.7-0
	I0717 22:50:55.787648   54573 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0717 22:50:55.788865   54573 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.27.3
	I0717 22:50:55.788870   54573 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0717 22:50:55.788875   54573 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:50:55.788919   54573 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 22:50:55.788929   54573 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0717 22:50:55.788869   54573 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.27.3
	I0717 22:50:55.788955   54573 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.27.3
	I0717 22:50:55.789279   54573 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.7-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.7-0
	I0717 22:50:55.956462   54573 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.27.3
	I0717 22:50:55.959183   54573 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0717 22:50:55.960353   54573 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.27.3
	I0717 22:50:55.961871   54573 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.7-0
	I0717 22:50:55.963472   54573 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0717 22:50:55.970739   54573 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 22:50:55.992476   54573 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.27.3
	I0717 22:50:56.099305   54573 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.27.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.27.3" does not exist at hash "41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a" in container runtime
	I0717 22:50:56.099353   54573 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.27.3
	I0717 22:50:56.099399   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:50:56.144906   54573 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:50:56.175359   54573 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.27.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.27.3" does not exist at hash "08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a" in container runtime
	I0717 22:50:56.175407   54573 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.27.3
	I0717 22:50:56.175409   54573 cache_images.go:116] "registry.k8s.io/etcd:3.5.7-0" needs transfer: "registry.k8s.io/etcd:3.5.7-0" does not exist at hash "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681" in container runtime
	I0717 22:50:56.175444   54573 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.7-0
	I0717 22:50:56.175508   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:50:56.175549   54573 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0717 22:50:56.175452   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:50:56.175577   54573 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0717 22:50:56.175622   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:50:56.205829   54573 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.27.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.27.3" does not exist at hash "7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f" in container runtime
	I0717 22:50:56.205877   54573 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 22:50:56.205929   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:50:56.205962   54573 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.27.3
	I0717 22:50:56.205875   54573 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.27.3" needs transfer: "registry.k8s.io/kube-proxy:v1.27.3" does not exist at hash "5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c" in container runtime
	I0717 22:50:56.206017   54573 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.27.3
	I0717 22:50:56.206039   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:50:56.230299   54573 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0717 22:50:56.230358   54573 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:50:56.230406   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:50:56.230508   54573 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0717 22:50:56.230526   54573 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.7-0
	I0717 22:50:56.230585   54573 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.27.3
	I0717 22:50:56.230619   54573 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 22:50:56.280737   54573 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.27.3
	I0717 22:50:56.280740   54573 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3
	I0717 22:50:56.280876   54573 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0717 22:50:56.346096   54573 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0
	I0717 22:50:56.346185   54573 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0717 22:50:56.346213   54573 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.7-0
	I0717 22:50:56.346257   54573 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3
	I0717 22:50:56.346281   54573 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I0717 22:50:56.346325   54573 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:50:56.346360   54573 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3
	I0717 22:50:56.346370   54573 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0717 22:50:56.346409   54573 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0717 22:50:56.361471   54573 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3
	I0717 22:50:56.361511   54573 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.27.3 (exists)
	I0717 22:50:56.361546   54573 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0717 22:50:56.361605   54573 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.27.3
	I0717 22:50:56.361606   54573 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0717 22:50:56.410058   54573 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 22:50:56.410140   54573 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.27.3 (exists)
	I0717 22:50:56.410177   54573 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0717 22:50:56.410222   54573 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0717 22:50:56.410317   54573 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.7-0 (exists)
	I0717 22:50:56.410389   54573 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.27.3 (exists)
	I0717 22:50:53.066719   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:50:53.096978   54248 api_server.go:72] duration metric: took 2.545662837s to wait for apiserver process to appear ...
	I0717 22:50:53.097002   54248 api_server.go:88] waiting for apiserver healthz status ...
	I0717 22:50:53.097021   54248 api_server.go:253] Checking apiserver healthz at https://192.168.61.179:8443/healthz ...
	I0717 22:50:57.043968   54248 api_server.go:279] https://192.168.61.179:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 22:50:57.044010   54248 api_server.go:103] status: https://192.168.61.179:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 22:50:57.544722   54248 api_server.go:253] Checking apiserver healthz at https://192.168.61.179:8443/healthz ...
	I0717 22:50:57.550687   54248 api_server.go:279] https://192.168.61.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 22:50:57.550718   54248 api_server.go:103] status: https://192.168.61.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 22:50:58.045135   54248 api_server.go:253] Checking apiserver healthz at https://192.168.61.179:8443/healthz ...
	I0717 22:50:58.058934   54248 api_server.go:279] https://192.168.61.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 22:50:58.058970   54248 api_server.go:103] status: https://192.168.61.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 22:50:58.544766   54248 api_server.go:253] Checking apiserver healthz at https://192.168.61.179:8443/healthz ...
	I0717 22:50:58.550628   54248 api_server.go:279] https://192.168.61.179:8443/healthz returned 200:
	ok
	I0717 22:50:58.559879   54248 api_server.go:141] control plane version: v1.27.3
	I0717 22:50:58.559912   54248 api_server.go:131] duration metric: took 5.462902985s to wait for apiserver health ...
	I0717 22:50:58.559925   54248 cni.go:84] Creating CNI manager for ""
	I0717 22:50:58.559936   54248 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:50:58.605706   54248 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 22:50:58.380501   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:50:58.380825   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:50:58.380842   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:50:58.380780   55310 retry.go:31] will retry after 1.23430246s: waiting for machine to come up
	I0717 22:50:59.617145   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:50:59.617808   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:50:59.617841   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:50:59.617730   55310 retry.go:31] will retry after 1.214374623s: waiting for machine to come up
	I0717 22:51:00.834129   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:00.834639   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:51:00.834680   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:51:00.834594   55310 retry.go:31] will retry after 1.950432239s: waiting for machine to come up
	I0717 22:50:58.680414   54573 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.27.3: (2.318705948s)
	I0717 22:50:58.680448   54573 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3 from cache
	I0717 22:50:58.680485   54573 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.27.3: (2.318846109s)
	I0717 22:50:58.680525   54573 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.27.3 (exists)
	I0717 22:50:58.680548   54573 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.270351678s)
	I0717 22:50:58.680595   54573 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0717 22:50:58.680614   54573 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0717 22:50:58.680674   54573 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0717 22:51:01.356090   54573 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.27.3: (2.675377242s)
	I0717 22:51:01.356124   54573 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3 from cache
	I0717 22:51:01.356174   54573 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0717 22:51:01.356232   54573 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0717 22:50:58.607184   54248 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 22:50:58.656720   54248 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 22:50:58.740705   54248 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 22:50:58.760487   54248 system_pods.go:59] 8 kube-system pods found
	I0717 22:50:58.760530   54248 system_pods.go:61] "coredns-5d78c9869d-pwd8q" [f8079ab4-1d34-4847-bdb9-7d0a500ed732] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 22:50:58.760542   54248 system_pods.go:61] "etcd-embed-certs-571296" [e2a4f2bb-a767-484f-9339-7024168bb59d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 22:50:58.760553   54248 system_pods.go:61] "kube-apiserver-embed-certs-571296" [313d49ba-2814-49e7-8b97-9c278fd33686] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 22:50:58.760600   54248 system_pods.go:61] "kube-controller-manager-embed-certs-571296" [03ede9e6-f06a-45a2-bafc-0ae24db96be8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 22:50:58.760720   54248 system_pods.go:61] "kube-proxy-kpt5d" [109fb9ce-61ab-46b0-aaf8-478d61c16fe9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 22:50:58.760754   54248 system_pods.go:61] "kube-scheduler-embed-certs-571296" [a10941b1-ac81-4224-bc9e-89228ad3d5c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 22:50:58.760765   54248 system_pods.go:61] "metrics-server-74d5c6b9c-jl7jl" [251ed989-12c1-49e5-bec1-114c3548c8fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:50:58.760784   54248 system_pods.go:61] "storage-provisioner" [fb7f6371-8788-4037-8eaf-6dc2189102ec] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 22:50:58.760795   54248 system_pods.go:74] duration metric: took 20.068616ms to wait for pod list to return data ...
	I0717 22:50:58.760807   54248 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:50:58.777293   54248 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:50:58.777328   54248 node_conditions.go:123] node cpu capacity is 2
	I0717 22:50:58.777343   54248 node_conditions.go:105] duration metric: took 16.528777ms to run NodePressure ...
	I0717 22:50:58.777364   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:50:59.270627   54248 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 22:50:59.277045   54248 kubeadm.go:787] kubelet initialised
	I0717 22:50:59.277074   54248 kubeadm.go:788] duration metric: took 6.413321ms waiting for restarted kubelet to initialise ...
	I0717 22:50:59.277083   54248 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:50:59.285338   54248 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-pwd8q" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:01.304495   54248 pod_ready.go:102] pod "coredns-5d78c9869d-pwd8q" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:02.787568   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:02.788090   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:51:02.788118   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:51:02.788031   55310 retry.go:31] will retry after 2.897894179s: waiting for machine to come up
	I0717 22:51:05.687387   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:05.687774   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:51:05.687816   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:51:05.687724   55310 retry.go:31] will retry after 3.029953032s: waiting for machine to come up
	I0717 22:51:02.822684   54573 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.466424442s)
	I0717 22:51:02.822717   54573 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0717 22:51:02.822741   54573 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.7-0
	I0717 22:51:02.822790   54573 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.7-0
	I0717 22:51:03.306481   54248 pod_ready.go:102] pod "coredns-5d78c9869d-pwd8q" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:04.302530   54248 pod_ready.go:92] pod "coredns-5d78c9869d-pwd8q" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:04.302560   54248 pod_ready.go:81] duration metric: took 5.01718551s waiting for pod "coredns-5d78c9869d-pwd8q" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:04.302573   54248 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:06.320075   54248 pod_ready.go:102] pod "etcd-embed-certs-571296" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:08.719593   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:08.720084   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:51:08.720116   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:51:08.720015   55310 retry.go:31] will retry after 3.646843477s: waiting for machine to come up
	I0717 22:51:12.370696   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.371189   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Found IP for machine: 192.168.72.118
	I0717 22:51:12.371225   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has current primary IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.371237   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Reserving static IP address...
	I0717 22:51:12.371698   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-504828", mac: "52:54:00:28:6f:f7", ip: "192.168.72.118"} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:12.371729   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Reserved static IP address: 192.168.72.118
	I0717 22:51:12.371747   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | skip adding static IP to network mk-default-k8s-diff-port-504828 - found existing host DHCP lease matching {name: "default-k8s-diff-port-504828", mac: "52:54:00:28:6f:f7", ip: "192.168.72.118"}
	I0717 22:51:12.371759   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for SSH to be available...
	I0717 22:51:12.371774   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | Getting to WaitForSSH function...
	I0717 22:51:12.374416   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.374804   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:12.374839   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.374958   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | Using SSH client type: external
	I0717 22:51:12.375000   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | Using SSH private key: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa (-rw-------)
	I0717 22:51:12.375056   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.118 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 22:51:12.375078   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | About to run SSH command:
	I0717 22:51:12.375103   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | exit 0
	I0717 22:51:12.461844   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | SSH cmd err, output: <nil>: 
	I0717 22:51:12.462190   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetConfigRaw
	I0717 22:51:12.462878   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetIP
	I0717 22:51:12.465698   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.466129   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:12.466171   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.466432   54649 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/config.json ...
	I0717 22:51:12.466686   54649 machine.go:88] provisioning docker machine ...
	I0717 22:51:12.466713   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:51:12.466932   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetMachineName
	I0717 22:51:12.467149   54649 buildroot.go:166] provisioning hostname "default-k8s-diff-port-504828"
	I0717 22:51:12.467174   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetMachineName
	I0717 22:51:12.467336   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:51:12.469892   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.470309   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:12.470347   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.470539   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:51:12.470711   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:12.470906   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:12.471075   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:51:12.471251   54649 main.go:141] libmachine: Using SSH client type: native
	I0717 22:51:12.471709   54649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.118 22 <nil> <nil>}
	I0717 22:51:12.471728   54649 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-504828 && echo "default-k8s-diff-port-504828" | sudo tee /etc/hostname
	I0717 22:51:10.226119   54573 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.7-0: (7.403300342s)
	I0717 22:51:10.226147   54573 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0 from cache
	I0717 22:51:10.226176   54573 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0717 22:51:10.226231   54573 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0717 22:51:12.580664   54573 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.27.3: (2.354394197s)
	I0717 22:51:12.580698   54573 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3 from cache
	I0717 22:51:12.580729   54573 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.27.3
	I0717 22:51:12.580786   54573 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.27.3
	I0717 22:51:08.320182   54248 pod_ready.go:92] pod "etcd-embed-certs-571296" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:08.320212   54248 pod_ready.go:81] duration metric: took 4.017631268s waiting for pod "etcd-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:08.320225   54248 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:08.327865   54248 pod_ready.go:92] pod "kube-apiserver-embed-certs-571296" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:08.327901   54248 pod_ready.go:81] duration metric: took 7.613771ms waiting for pod "kube-apiserver-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:08.327916   54248 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:10.343489   54248 pod_ready.go:102] pod "kube-controller-manager-embed-certs-571296" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:11.344309   54248 pod_ready.go:92] pod "kube-controller-manager-embed-certs-571296" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:11.344328   54248 pod_ready.go:81] duration metric: took 3.016404448s waiting for pod "kube-controller-manager-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:11.344338   54248 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kpt5d" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:11.353150   54248 pod_ready.go:92] pod "kube-proxy-kpt5d" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:11.353174   54248 pod_ready.go:81] duration metric: took 8.829647ms waiting for pod "kube-proxy-kpt5d" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:11.353183   54248 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:11.360223   54248 pod_ready.go:92] pod "kube-scheduler-embed-certs-571296" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:11.360242   54248 pod_ready.go:81] duration metric: took 7.0537ms waiting for pod "kube-scheduler-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:11.360251   54248 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:13.630627   53870 start.go:369] acquired machines lock for "old-k8s-version-332820" in 58.214644858s
	I0717 22:51:13.630698   53870 start.go:96] Skipping create...Using existing machine configuration
	I0717 22:51:13.630705   53870 fix.go:54] fixHost starting: 
	I0717 22:51:13.631117   53870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:51:13.631153   53870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:51:13.651676   53870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38349
	I0717 22:51:13.652152   53870 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:51:13.652820   53870 main.go:141] libmachine: Using API Version  1
	I0717 22:51:13.652841   53870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:51:13.653180   53870 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:51:13.653679   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:51:13.653832   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetState
	I0717 22:51:13.656911   53870 fix.go:102] recreateIfNeeded on old-k8s-version-332820: state=Stopped err=<nil>
	I0717 22:51:13.656944   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	W0717 22:51:13.657151   53870 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 22:51:13.659194   53870 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-332820" ...
	I0717 22:51:12.607198   54649 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-504828
	
	I0717 22:51:12.607256   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:51:12.610564   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.611073   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:12.611139   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.611470   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:51:12.611707   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:12.611918   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:12.612080   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:51:12.612267   54649 main.go:141] libmachine: Using SSH client type: native
	I0717 22:51:12.612863   54649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.118 22 <nil> <nil>}
	I0717 22:51:12.612897   54649 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-504828' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-504828/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-504828' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 22:51:12.749133   54649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:51:12.749159   54649 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16899-15759/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-15759/.minikube}
	I0717 22:51:12.749187   54649 buildroot.go:174] setting up certificates
	I0717 22:51:12.749198   54649 provision.go:83] configureAuth start
	I0717 22:51:12.749211   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetMachineName
	I0717 22:51:12.749475   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetIP
	I0717 22:51:12.752199   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.752608   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:12.752637   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.752753   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:51:12.754758   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.755095   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:12.755142   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.755255   54649 provision.go:138] copyHostCerts
	I0717 22:51:12.755313   54649 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem, removing ...
	I0717 22:51:12.755328   54649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem
	I0717 22:51:12.755393   54649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem (1078 bytes)
	I0717 22:51:12.755503   54649 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem, removing ...
	I0717 22:51:12.755516   54649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem
	I0717 22:51:12.755547   54649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem (1123 bytes)
	I0717 22:51:12.755615   54649 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem, removing ...
	I0717 22:51:12.755626   54649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem
	I0717 22:51:12.755649   54649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem (1675 bytes)
	I0717 22:51:12.755708   54649 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-504828 san=[192.168.72.118 192.168.72.118 localhost 127.0.0.1 minikube default-k8s-diff-port-504828]
	I0717 22:51:12.865920   54649 provision.go:172] copyRemoteCerts
	I0717 22:51:12.865978   54649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 22:51:12.865998   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:51:12.868784   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.869162   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:12.869196   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.869354   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:51:12.869551   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:12.869731   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:51:12.869864   54649 sshutil.go:53] new ssh client: &{IP:192.168.72.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa Username:docker}
	I0717 22:51:12.963734   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 22:51:12.988925   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0717 22:51:13.014007   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 22:51:13.037974   54649 provision.go:86] duration metric: configureAuth took 288.764872ms
	I0717 22:51:13.038002   54649 buildroot.go:189] setting minikube options for container-runtime
	I0717 22:51:13.038226   54649 config.go:182] Loaded profile config "default-k8s-diff-port-504828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:51:13.038298   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:51:13.041038   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.041510   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:13.041560   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.041722   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:51:13.041928   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:13.042115   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:13.042265   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:51:13.042462   54649 main.go:141] libmachine: Using SSH client type: native
	I0717 22:51:13.042862   54649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.118 22 <nil> <nil>}
	I0717 22:51:13.042883   54649 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 22:51:13.359789   54649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 22:51:13.359856   54649 machine.go:91] provisioned docker machine in 893.152202ms
	I0717 22:51:13.359873   54649 start.go:300] post-start starting for "default-k8s-diff-port-504828" (driver="kvm2")
	I0717 22:51:13.359885   54649 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 22:51:13.359909   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:51:13.360286   54649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 22:51:13.360322   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:51:13.363265   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.363637   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:13.363668   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.363953   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:51:13.364165   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:13.364336   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:51:13.364484   54649 sshutil.go:53] new ssh client: &{IP:192.168.72.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa Username:docker}
	I0717 22:51:13.456030   54649 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 22:51:13.460504   54649 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 22:51:13.460539   54649 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/addons for local assets ...
	I0717 22:51:13.460610   54649 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/files for local assets ...
	I0717 22:51:13.460711   54649 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> 229902.pem in /etc/ssl/certs
	I0717 22:51:13.460824   54649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 22:51:13.469442   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:51:13.497122   54649 start.go:303] post-start completed in 137.230872ms
	I0717 22:51:13.497150   54649 fix.go:56] fixHost completed within 19.269364226s
	I0717 22:51:13.497196   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:51:13.500248   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.500673   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:13.500721   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.500872   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:51:13.501093   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:13.501256   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:13.501434   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:51:13.501602   54649 main.go:141] libmachine: Using SSH client type: native
	I0717 22:51:13.502063   54649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.118 22 <nil> <nil>}
	I0717 22:51:13.502080   54649 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 22:51:13.630454   54649 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689634273.570672552
	
	I0717 22:51:13.630476   54649 fix.go:206] guest clock: 1689634273.570672552
	I0717 22:51:13.630486   54649 fix.go:219] Guest: 2023-07-17 22:51:13.570672552 +0000 UTC Remote: 2023-07-17 22:51:13.49715425 +0000 UTC m=+216.001835933 (delta=73.518302ms)
	I0717 22:51:13.630534   54649 fix.go:190] guest clock delta is within tolerance: 73.518302ms
	I0717 22:51:13.630541   54649 start.go:83] releasing machines lock for "default-k8s-diff-port-504828", held for 19.402800296s
	I0717 22:51:13.630571   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:51:13.630804   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetIP
	I0717 22:51:13.633831   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.634285   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:13.634329   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.634496   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:51:13.635108   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:51:13.635324   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:51:13.635440   54649 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 22:51:13.635513   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:51:13.635563   54649 ssh_runner.go:195] Run: cat /version.json
	I0717 22:51:13.635590   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:51:13.638872   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.639085   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.639277   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:13.639313   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.639513   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:51:13.639711   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:13.639730   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:13.639769   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.639930   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:51:13.639966   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:51:13.640133   54649 sshutil.go:53] new ssh client: &{IP:192.168.72.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa Username:docker}
	I0717 22:51:13.640149   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:13.640293   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:51:13.640432   54649 sshutil.go:53] new ssh client: &{IP:192.168.72.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa Username:docker}
	I0717 22:51:13.732117   54649 ssh_runner.go:195] Run: systemctl --version
	I0717 22:51:13.762073   54649 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 22:51:13.920611   54649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 22:51:13.927492   54649 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 22:51:13.927552   54649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:51:13.943359   54649 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 22:51:13.943384   54649 start.go:466] detecting cgroup driver to use...
	I0717 22:51:13.943456   54649 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 22:51:13.959123   54649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 22:51:13.974812   54649 docker.go:196] disabling cri-docker service (if available) ...
	I0717 22:51:13.974875   54649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 22:51:13.991292   54649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 22:51:14.006999   54649 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 22:51:14.116763   54649 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 22:51:14.286675   54649 docker.go:212] disabling docker service ...
	I0717 22:51:14.286747   54649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 22:51:14.304879   54649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 22:51:14.319280   54649 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 22:51:14.436994   54649 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 22:51:14.551392   54649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 22:51:14.564944   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 22:51:14.588553   54649 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 22:51:14.588618   54649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:51:14.602482   54649 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 22:51:14.602561   54649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:51:14.613901   54649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:51:14.624520   54649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:51:14.634941   54649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 22:51:14.649124   54649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 22:51:14.659103   54649 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 22:51:14.659194   54649 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 22:51:14.673064   54649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 22:51:14.684547   54649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 22:51:14.796698   54649 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 22:51:15.013266   54649 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 22:51:15.013352   54649 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 22:51:15.019638   54649 start.go:534] Will wait 60s for crictl version
	I0717 22:51:15.019707   54649 ssh_runner.go:195] Run: which crictl
	I0717 22:51:15.023691   54649 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 22:51:15.079550   54649 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 22:51:15.079642   54649 ssh_runner.go:195] Run: crio --version
	I0717 22:51:15.149137   54649 ssh_runner.go:195] Run: crio --version
	I0717 22:51:15.210171   54649 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 22:51:15.211641   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetIP
	I0717 22:51:15.214746   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:15.215160   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:15.215195   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:15.215444   54649 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 22:51:15.220209   54649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:51:15.233265   54649 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 22:51:15.233336   54649 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:51:15.278849   54649 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 22:51:15.278928   54649 ssh_runner.go:195] Run: which lz4
	I0717 22:51:15.284618   54649 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 22:51:15.289979   54649 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 22:51:15.290021   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0717 22:51:17.240790   54649 crio.go:444] Took 1.956220 seconds to copy over tarball
	I0717 22:51:17.240850   54649 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 22:51:14.577167   54573 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.27.3: (1.996354374s)
	I0717 22:51:14.577200   54573 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3 from cache
	I0717 22:51:14.577239   54573 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 22:51:14.577288   54573 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0717 22:51:15.749388   54573 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.172071962s)
	I0717 22:51:15.749419   54573 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 22:51:15.749442   54573 cache_images.go:123] Successfully loaded all cached images
	I0717 22:51:15.749448   54573 cache_images.go:92] LoadImages completed in 19.962118423s
	I0717 22:51:15.749548   54573 ssh_runner.go:195] Run: crio config
	I0717 22:51:15.830341   54573 cni.go:84] Creating CNI manager for ""
	I0717 22:51:15.830380   54573 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:51:15.830394   54573 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 22:51:15.830416   54573 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.6 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-935524 NodeName:no-preload-935524 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 22:51:15.830609   54573 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-935524"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 22:51:15.830710   54573 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-935524 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:no-preload-935524 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 22:51:15.830777   54573 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 22:51:15.844785   54573 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 22:51:15.844854   54573 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 22:51:15.859135   54573 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0717 22:51:15.884350   54573 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 22:51:15.904410   54573 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0717 22:51:15.930959   54573 ssh_runner.go:195] Run: grep 192.168.39.6	control-plane.minikube.internal$ /etc/hosts
	I0717 22:51:15.937680   54573 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:51:15.960124   54573 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524 for IP: 192.168.39.6
	I0717 22:51:15.960169   54573 certs.go:190] acquiring lock for shared ca certs: {Name:mk358cdd8ffcf2f8ada4337c2bb687d932ef5afe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:51:15.960352   54573 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key
	I0717 22:51:15.960416   54573 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key
	I0717 22:51:15.960539   54573 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/client.key
	I0717 22:51:15.960635   54573 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/apiserver.key.cc3bd7a5
	I0717 22:51:15.960694   54573 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/proxy-client.key
	I0717 22:51:15.960842   54573 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem (1338 bytes)
	W0717 22:51:15.960882   54573 certs.go:433] ignoring /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990_empty.pem, impossibly tiny 0 bytes
	I0717 22:51:15.960899   54573 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 22:51:15.960936   54573 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem (1078 bytes)
	I0717 22:51:15.960973   54573 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem (1123 bytes)
	I0717 22:51:15.961001   54573 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem (1675 bytes)
	I0717 22:51:15.961063   54573 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:51:15.961864   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 22:51:16.000246   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 22:51:16.036739   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 22:51:16.073916   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 22:51:16.110871   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 22:51:16.147671   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 22:51:16.183503   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 22:51:16.216441   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 22:51:16.251053   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 22:51:16.291022   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem --> /usr/share/ca-certificates/22990.pem (1338 bytes)
	I0717 22:51:16.327764   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /usr/share/ca-certificates/229902.pem (1708 bytes)
	I0717 22:51:16.360870   54573 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 22:51:16.399760   54573 ssh_runner.go:195] Run: openssl version
	I0717 22:51:16.407720   54573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 22:51:16.423038   54573 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:51:16.430870   54573 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:51:16.430933   54573 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:51:16.441206   54573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 22:51:16.455708   54573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22990.pem && ln -fs /usr/share/ca-certificates/22990.pem /etc/ssl/certs/22990.pem"
	I0717 22:51:16.470036   54573 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22990.pem
	I0717 22:51:16.477133   54573 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 21:49 /usr/share/ca-certificates/22990.pem
	I0717 22:51:16.477206   54573 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22990.pem
	I0717 22:51:16.485309   54573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22990.pem /etc/ssl/certs/51391683.0"
	I0717 22:51:16.503973   54573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/229902.pem && ln -fs /usr/share/ca-certificates/229902.pem /etc/ssl/certs/229902.pem"
	I0717 22:51:16.524430   54573 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/229902.pem
	I0717 22:51:16.533991   54573 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 21:49 /usr/share/ca-certificates/229902.pem
	I0717 22:51:16.534052   54573 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/229902.pem
	I0717 22:51:16.544688   54573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/229902.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 22:51:16.563847   54573 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 22:51:16.572122   54573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 22:51:16.583217   54573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 22:51:16.594130   54573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 22:51:16.606268   54573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 22:51:16.618166   54573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 22:51:16.628424   54573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 22:51:16.636407   54573 kubeadm.go:404] StartCluster: {Name:no-preload-935524 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-935524 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:51:16.636531   54573 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 22:51:16.636616   54573 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:51:16.677023   54573 cri.go:89] found id: ""
	I0717 22:51:16.677096   54573 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 22:51:16.691214   54573 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 22:51:16.691243   54573 kubeadm.go:636] restartCluster start
	I0717 22:51:16.691309   54573 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 22:51:16.705358   54573 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:16.707061   54573 kubeconfig.go:92] found "no-preload-935524" server: "https://192.168.39.6:8443"
	I0717 22:51:16.710828   54573 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 22:51:16.722187   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:16.722262   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:16.739474   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:17.240340   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:17.240432   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:17.255528   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:13.660641   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .Start
	I0717 22:51:13.660899   53870 main.go:141] libmachine: (old-k8s-version-332820) Ensuring networks are active...
	I0717 22:51:13.661724   53870 main.go:141] libmachine: (old-k8s-version-332820) Ensuring network default is active
	I0717 22:51:13.662114   53870 main.go:141] libmachine: (old-k8s-version-332820) Ensuring network mk-old-k8s-version-332820 is active
	I0717 22:51:13.662588   53870 main.go:141] libmachine: (old-k8s-version-332820) Getting domain xml...
	I0717 22:51:13.663907   53870 main.go:141] libmachine: (old-k8s-version-332820) Creating domain...
	I0717 22:51:14.067159   53870 main.go:141] libmachine: (old-k8s-version-332820) Waiting to get IP...
	I0717 22:51:14.067897   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:14.068328   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:14.068398   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:14.068321   55454 retry.go:31] will retry after 239.1687ms: waiting for machine to come up
	I0717 22:51:14.309022   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:14.309748   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:14.309782   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:14.309696   55454 retry.go:31] will retry after 256.356399ms: waiting for machine to come up
	I0717 22:51:14.568103   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:14.568537   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:14.568572   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:14.568490   55454 retry.go:31] will retry after 386.257739ms: waiting for machine to come up
	I0717 22:51:14.955922   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:14.956518   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:14.956548   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:14.956458   55454 retry.go:31] will retry after 410.490408ms: waiting for machine to come up
	I0717 22:51:15.368904   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:15.369672   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:15.369780   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:15.369722   55454 retry.go:31] will retry after 536.865068ms: waiting for machine to come up
	I0717 22:51:15.908301   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:15.908814   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:15.908851   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:15.908774   55454 retry.go:31] will retry after 863.22272ms: waiting for machine to come up
	I0717 22:51:16.773413   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:16.773936   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:16.773971   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:16.773877   55454 retry.go:31] will retry after 858.793193ms: waiting for machine to come up
	I0717 22:51:17.634087   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:17.634588   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:17.634613   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:17.634532   55454 retry.go:31] will retry after 1.416659037s: waiting for machine to come up
	I0717 22:51:13.375358   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:15.393985   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:17.887365   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:20.250749   54649 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.009864781s)
	I0717 22:51:20.250783   54649 crio.go:451] Took 3.009971 seconds to extract the tarball
	I0717 22:51:20.250793   54649 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 22:51:20.291666   54649 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:51:20.341098   54649 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 22:51:20.341126   54649 cache_images.go:84] Images are preloaded, skipping loading
	I0717 22:51:20.341196   54649 ssh_runner.go:195] Run: crio config
	I0717 22:51:20.415138   54649 cni.go:84] Creating CNI manager for ""
	I0717 22:51:20.415161   54649 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:51:20.415171   54649 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 22:51:20.415185   54649 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.118 APIServerPort:8444 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-504828 NodeName:default-k8s-diff-port-504828 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.118"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.118 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 22:51:20.415352   54649 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.118
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-504828"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.118
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.118"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 22:51:20.415432   54649 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-504828 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.118
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-504828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0717 22:51:20.415488   54649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 22:51:20.427702   54649 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 22:51:20.427758   54649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 22:51:20.436950   54649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0717 22:51:20.454346   54649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 22:51:20.470679   54649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0717 22:51:20.491725   54649 ssh_runner.go:195] Run: grep 192.168.72.118	control-plane.minikube.internal$ /etc/hosts
	I0717 22:51:20.495952   54649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.118	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:51:20.511714   54649 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828 for IP: 192.168.72.118
	I0717 22:51:20.511768   54649 certs.go:190] acquiring lock for shared ca certs: {Name:mk358cdd8ffcf2f8ada4337c2bb687d932ef5afe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:51:20.511949   54649 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key
	I0717 22:51:20.511997   54649 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key
	I0717 22:51:20.512100   54649 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/client.key
	I0717 22:51:20.512210   54649 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/apiserver.key.f316a5ec
	I0717 22:51:20.512293   54649 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/proxy-client.key
	I0717 22:51:20.512432   54649 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem (1338 bytes)
	W0717 22:51:20.512474   54649 certs.go:433] ignoring /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990_empty.pem, impossibly tiny 0 bytes
	I0717 22:51:20.512490   54649 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 22:51:20.512526   54649 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem (1078 bytes)
	I0717 22:51:20.512563   54649 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem (1123 bytes)
	I0717 22:51:20.512597   54649 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem (1675 bytes)
	I0717 22:51:20.512654   54649 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:51:20.513217   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 22:51:20.543975   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 22:51:20.573149   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 22:51:20.603536   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 22:51:20.632387   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 22:51:20.658524   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 22:51:20.685636   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 22:51:20.715849   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 22:51:20.746544   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /usr/share/ca-certificates/229902.pem (1708 bytes)
	I0717 22:51:20.773588   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 22:51:20.798921   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem --> /usr/share/ca-certificates/22990.pem (1338 bytes)
	I0717 22:51:20.826004   54649 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 22:51:20.843941   54649 ssh_runner.go:195] Run: openssl version
	I0717 22:51:20.849904   54649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/229902.pem && ln -fs /usr/share/ca-certificates/229902.pem /etc/ssl/certs/229902.pem"
	I0717 22:51:20.860510   54649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/229902.pem
	I0717 22:51:20.865435   54649 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 21:49 /usr/share/ca-certificates/229902.pem
	I0717 22:51:20.865499   54649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/229902.pem
	I0717 22:51:20.872493   54649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/229902.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 22:51:20.883044   54649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 22:51:20.893448   54649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:51:20.898872   54649 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:51:20.898937   54649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:51:20.905231   54649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 22:51:20.915267   54649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22990.pem && ln -fs /usr/share/ca-certificates/22990.pem /etc/ssl/certs/22990.pem"
	I0717 22:51:20.925267   54649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22990.pem
	I0717 22:51:20.929988   54649 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 21:49 /usr/share/ca-certificates/22990.pem
	I0717 22:51:20.930055   54649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22990.pem
	I0717 22:51:20.935935   54649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22990.pem /etc/ssl/certs/51391683.0"
	I0717 22:51:20.945567   54649 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 22:51:20.950083   54649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 22:51:20.956164   54649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 22:51:20.962921   54649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 22:51:20.969329   54649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 22:51:20.975672   54649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 22:51:20.981532   54649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 22:51:20.987431   54649 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-504828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-504828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.118 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:51:20.987551   54649 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 22:51:20.987640   54649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:51:21.020184   54649 cri.go:89] found id: ""
	I0717 22:51:21.020272   54649 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 22:51:21.030407   54649 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 22:51:21.030426   54649 kubeadm.go:636] restartCluster start
	I0717 22:51:21.030484   54649 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 22:51:21.039171   54649 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:21.040133   54649 kubeconfig.go:92] found "default-k8s-diff-port-504828" server: "https://192.168.72.118:8444"
	I0717 22:51:21.043010   54649 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 22:51:21.052032   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:21.052083   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:21.063718   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:21.564403   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:21.564474   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:21.576250   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:22.063846   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:22.063915   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:22.077908   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:17.739595   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:17.739675   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:17.754882   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:18.240006   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:18.240109   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:18.253391   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:18.739658   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:18.739750   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:18.751666   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:19.240285   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:19.240385   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:19.254816   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:19.740338   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:19.740430   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:19.757899   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:20.240481   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:20.240561   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:20.255605   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:20.739950   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:20.740064   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:20.754552   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:21.240009   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:21.240088   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:21.252127   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:21.739671   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:21.739761   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:21.751590   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:22.239795   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:22.239895   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:22.255489   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:19.053039   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:19.053552   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:19.053577   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:19.053545   55454 retry.go:31] will retry after 1.844468395s: waiting for machine to come up
	I0717 22:51:20.899373   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:20.899955   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:20.899985   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:20.899907   55454 retry.go:31] will retry after 1.689590414s: waiting for machine to come up
	I0717 22:51:22.590651   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:22.591178   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:22.591210   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:22.591133   55454 retry.go:31] will retry after 2.006187847s: waiting for machine to come up
	I0717 22:51:20.375100   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:22.375448   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:22.564646   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:22.564758   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:22.578416   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:23.063819   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:23.063917   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:23.076239   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:23.563771   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:23.563906   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:23.577184   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:24.064855   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:24.064943   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:24.080926   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:24.563906   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:24.564002   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:24.580421   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:25.063993   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:25.064078   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:25.076570   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:25.563894   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:25.563978   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:25.575475   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:26.063959   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:26.064042   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:26.075498   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:26.564007   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:26.564068   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:26.576760   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:27.064334   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:27.064437   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:27.076567   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:22.739773   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:22.739859   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:22.752462   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:23.240402   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:23.240481   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:23.255896   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:23.740550   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:23.740740   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:23.756364   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:24.239721   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:24.239803   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:24.251755   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:24.740355   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:24.740455   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:24.751880   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:25.240545   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:25.240637   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:25.252165   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:25.739649   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:25.739729   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:25.751302   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:26.239861   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:26.239951   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:26.251854   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:26.722721   54573 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 22:51:26.722761   54573 kubeadm.go:1128] stopping kube-system containers ...
	I0717 22:51:26.722774   54573 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 22:51:26.722824   54573 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:51:26.754496   54573 cri.go:89] found id: ""
	I0717 22:51:26.754575   54573 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 22:51:26.769858   54573 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 22:51:26.778403   54573 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:51:26.778456   54573 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 22:51:26.788782   54573 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 22:51:26.788809   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:26.926114   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:24.598549   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:24.599047   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:24.599078   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:24.598993   55454 retry.go:31] will retry after 2.77055632s: waiting for machine to come up
	I0717 22:51:27.371775   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:27.372248   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:27.372282   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:27.372196   55454 retry.go:31] will retry after 3.942088727s: waiting for machine to come up
	I0717 22:51:24.876056   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:26.876873   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:27.564363   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:27.564459   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:27.578222   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:28.063778   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:28.063883   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:28.075427   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:28.564630   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:28.564717   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:28.576903   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:29.064502   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:29.064605   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:29.075995   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:29.564295   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:29.564378   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:29.576762   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:30.063786   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:30.063870   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:30.079670   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:30.564137   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:30.564246   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:30.579055   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:31.052972   54649 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 22:51:31.053010   54649 kubeadm.go:1128] stopping kube-system containers ...
	I0717 22:51:31.053022   54649 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 22:51:31.053071   54649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:51:31.087580   54649 cri.go:89] found id: ""
	I0717 22:51:31.087681   54649 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 22:51:31.103788   54649 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 22:51:31.113570   54649 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:51:31.113630   54649 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 22:51:31.122993   54649 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 22:51:31.123016   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:31.254859   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:32.122277   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:32.360183   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:32.499924   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:28.181412   54573 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.255240525s)
	I0717 22:51:28.181446   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:28.398026   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:28.491028   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:28.586346   54573 api_server.go:52] waiting for apiserver process to appear ...
	I0717 22:51:28.586450   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:29.099979   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:29.599755   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:30.100095   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:30.600338   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:31.100205   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:31.129978   54573 api_server.go:72] duration metric: took 2.543631809s to wait for apiserver process to appear ...
	I0717 22:51:31.130004   54573 api_server.go:88] waiting for apiserver healthz status ...
	I0717 22:51:31.130020   54573 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0717 22:51:31.316328   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.316892   53870 main.go:141] libmachine: (old-k8s-version-332820) Found IP for machine: 192.168.50.149
	I0717 22:51:31.316924   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has current primary IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.316936   53870 main.go:141] libmachine: (old-k8s-version-332820) Reserving static IP address...
	I0717 22:51:31.317425   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "old-k8s-version-332820", mac: "52:54:00:46:ca:1a", ip: "192.168.50.149"} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:31.317463   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | skip adding static IP to network mk-old-k8s-version-332820 - found existing host DHCP lease matching {name: "old-k8s-version-332820", mac: "52:54:00:46:ca:1a", ip: "192.168.50.149"}
	I0717 22:51:31.317486   53870 main.go:141] libmachine: (old-k8s-version-332820) Reserved static IP address: 192.168.50.149
	I0717 22:51:31.317503   53870 main.go:141] libmachine: (old-k8s-version-332820) Waiting for SSH to be available...
	I0717 22:51:31.317531   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | Getting to WaitForSSH function...
	I0717 22:51:31.320209   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.320558   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:31.320593   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.320779   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | Using SSH client type: external
	I0717 22:51:31.320810   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | Using SSH private key: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/old-k8s-version-332820/id_rsa (-rw-------)
	I0717 22:51:31.320862   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.149 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16899-15759/.minikube/machines/old-k8s-version-332820/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 22:51:31.320881   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | About to run SSH command:
	I0717 22:51:31.320895   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | exit 0
	I0717 22:51:31.426263   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | SSH cmd err, output: <nil>: 
	I0717 22:51:31.426659   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetConfigRaw
	I0717 22:51:31.427329   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetIP
	I0717 22:51:31.430330   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.430697   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:31.430739   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.431053   53870 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/config.json ...
	I0717 22:51:31.431288   53870 machine.go:88] provisioning docker machine ...
	I0717 22:51:31.431312   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:51:31.431531   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetMachineName
	I0717 22:51:31.431711   53870 buildroot.go:166] provisioning hostname "old-k8s-version-332820"
	I0717 22:51:31.431736   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetMachineName
	I0717 22:51:31.431959   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:51:31.434616   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.435073   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:31.435105   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.435246   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:51:31.435429   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:31.435578   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:31.435720   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:51:31.435889   53870 main.go:141] libmachine: Using SSH client type: native
	I0717 22:51:31.436476   53870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.149 22 <nil> <nil>}
	I0717 22:51:31.436499   53870 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-332820 && echo "old-k8s-version-332820" | sudo tee /etc/hostname
	I0717 22:51:31.589302   53870 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-332820
	
	I0717 22:51:31.589343   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:51:31.592724   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.593180   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:31.593236   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.593559   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:51:31.593754   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:31.593922   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:31.594077   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:51:31.594266   53870 main.go:141] libmachine: Using SSH client type: native
	I0717 22:51:31.594671   53870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.149 22 <nil> <nil>}
	I0717 22:51:31.594696   53870 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-332820' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-332820/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-332820' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 22:51:31.746218   53870 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:51:31.746250   53870 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16899-15759/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-15759/.minikube}
	I0717 22:51:31.746274   53870 buildroot.go:174] setting up certificates
	I0717 22:51:31.746298   53870 provision.go:83] configureAuth start
	I0717 22:51:31.746316   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetMachineName
	I0717 22:51:31.746626   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetIP
	I0717 22:51:31.750130   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.750678   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:31.750724   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.750781   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:51:31.753170   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.753495   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:31.753552   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.753654   53870 provision.go:138] copyHostCerts
	I0717 22:51:31.753715   53870 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem, removing ...
	I0717 22:51:31.753728   53870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem
	I0717 22:51:31.753804   53870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem (1078 bytes)
	I0717 22:51:31.753944   53870 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem, removing ...
	I0717 22:51:31.753957   53870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem
	I0717 22:51:31.753989   53870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem (1123 bytes)
	I0717 22:51:31.754072   53870 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem, removing ...
	I0717 22:51:31.754085   53870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem
	I0717 22:51:31.754113   53870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem (1675 bytes)
	I0717 22:51:31.754184   53870 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-332820 san=[192.168.50.149 192.168.50.149 localhost 127.0.0.1 minikube old-k8s-version-332820]
	I0717 22:51:31.847147   53870 provision.go:172] copyRemoteCerts
	I0717 22:51:31.847203   53870 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 22:51:31.847225   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:51:31.850322   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.850753   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:31.850810   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.851095   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:51:31.851414   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:31.851605   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:51:31.851784   53870 sshutil.go:53] new ssh client: &{IP:192.168.50.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/old-k8s-version-332820/id_rsa Username:docker}
	I0717 22:51:31.951319   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 22:51:31.980515   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 22:51:32.010536   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 22:51:32.037399   53870 provision.go:86] duration metric: configureAuth took 291.082125ms
	I0717 22:51:32.037434   53870 buildroot.go:189] setting minikube options for container-runtime
	I0717 22:51:32.037660   53870 config.go:182] Loaded profile config "old-k8s-version-332820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0717 22:51:32.037735   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:51:32.040863   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.041427   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:32.041534   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.041625   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:51:32.041848   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:32.042053   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:32.042225   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:51:32.042394   53870 main.go:141] libmachine: Using SSH client type: native
	I0717 22:51:32.042812   53870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.149 22 <nil> <nil>}
	I0717 22:51:32.042834   53870 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 22:51:32.425577   53870 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 22:51:32.425603   53870 machine.go:91] provisioned docker machine in 994.299178ms
	I0717 22:51:32.425615   53870 start.go:300] post-start starting for "old-k8s-version-332820" (driver="kvm2")
	I0717 22:51:32.425627   53870 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 22:51:32.425662   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:51:32.426023   53870 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 22:51:32.426060   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:51:32.429590   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.430060   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:32.430087   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.430464   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:51:32.430677   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:32.430839   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:51:32.430955   53870 sshutil.go:53] new ssh client: &{IP:192.168.50.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/old-k8s-version-332820/id_rsa Username:docker}
	I0717 22:51:32.535625   53870 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 22:51:32.541510   53870 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 22:51:32.541569   53870 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/addons for local assets ...
	I0717 22:51:32.541660   53870 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/files for local assets ...
	I0717 22:51:32.541771   53870 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> 229902.pem in /etc/ssl/certs
	I0717 22:51:32.541919   53870 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 22:51:32.554113   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:51:32.579574   53870 start.go:303] post-start completed in 153.943669ms
	I0717 22:51:32.579597   53870 fix.go:56] fixHost completed within 18.948892402s
	I0717 22:51:32.579620   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:51:32.582411   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.582774   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:32.582807   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.582939   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:51:32.583181   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:32.583404   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:32.583562   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:51:32.583804   53870 main.go:141] libmachine: Using SSH client type: native
	I0717 22:51:32.584270   53870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.149 22 <nil> <nil>}
	I0717 22:51:32.584287   53870 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 22:51:32.727134   53870 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689634292.668672695
	
	I0717 22:51:32.727160   53870 fix.go:206] guest clock: 1689634292.668672695
	I0717 22:51:32.727171   53870 fix.go:219] Guest: 2023-07-17 22:51:32.668672695 +0000 UTC Remote: 2023-07-17 22:51:32.579600815 +0000 UTC m=+359.756107714 (delta=89.07188ms)
	I0717 22:51:32.727195   53870 fix.go:190] guest clock delta is within tolerance: 89.07188ms
	I0717 22:51:32.727201   53870 start.go:83] releasing machines lock for "old-k8s-version-332820", held for 19.096529597s
	I0717 22:51:32.727223   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:51:32.727539   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetIP
	I0717 22:51:32.730521   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.730926   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:32.730958   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.731115   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:51:32.731706   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:51:32.731881   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:51:32.731968   53870 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 22:51:32.732018   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:51:32.732115   53870 ssh_runner.go:195] Run: cat /version.json
	I0717 22:51:32.732141   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:51:32.734864   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.735214   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:32.735264   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.735284   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.735387   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:51:32.735561   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:32.735821   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:32.735832   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:51:32.735852   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.735958   53870 sshutil.go:53] new ssh client: &{IP:192.168.50.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/old-k8s-version-332820/id_rsa Username:docker}
	I0717 22:51:32.736097   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:51:32.736224   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:32.736329   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:51:32.736435   53870 sshutil.go:53] new ssh client: &{IP:192.168.50.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/old-k8s-version-332820/id_rsa Username:docker}
	I0717 22:51:32.854136   53870 ssh_runner.go:195] Run: systemctl --version
	I0717 22:51:29.375082   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:31.376747   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:32.860997   53870 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 22:51:33.025325   53870 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 22:51:33.031587   53870 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 22:51:33.031662   53870 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:51:33.046431   53870 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 22:51:33.046454   53870 start.go:466] detecting cgroup driver to use...
	I0717 22:51:33.046520   53870 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 22:51:33.067265   53870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 22:51:33.079490   53870 docker.go:196] disabling cri-docker service (if available) ...
	I0717 22:51:33.079543   53870 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 22:51:33.093639   53870 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 22:51:33.106664   53870 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 22:51:33.248823   53870 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 22:51:33.414350   53870 docker.go:212] disabling docker service ...
	I0717 22:51:33.414420   53870 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 22:51:33.428674   53870 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 22:51:33.442140   53870 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 22:51:33.564890   53870 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 22:51:33.699890   53870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 22:51:33.714011   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 22:51:33.733726   53870 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0717 22:51:33.733825   53870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:51:33.746603   53870 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 22:51:33.746676   53870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:51:33.759291   53870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:51:33.772841   53870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:51:33.785507   53870 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 22:51:33.798349   53870 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 22:51:33.807468   53870 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 22:51:33.807578   53870 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 22:51:33.822587   53870 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 22:51:33.832542   53870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 22:51:33.975008   53870 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 22:51:34.192967   53870 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 22:51:34.193041   53870 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 22:51:34.200128   53870 start.go:534] Will wait 60s for crictl version
	I0717 22:51:34.200194   53870 ssh_runner.go:195] Run: which crictl
	I0717 22:51:34.204913   53870 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 22:51:34.243900   53870 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 22:51:34.244054   53870 ssh_runner.go:195] Run: crio --version
	I0717 22:51:34.300151   53870 ssh_runner.go:195] Run: crio --version
	I0717 22:51:34.365344   53870 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0717 22:51:35.258235   54573 api_server.go:279] https://192.168.39.6:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 22:51:35.258266   54573 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 22:51:35.758740   54573 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0717 22:51:35.767634   54573 api_server.go:279] https://192.168.39.6:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 22:51:35.767669   54573 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 22:51:36.259368   54573 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0717 22:51:36.269761   54573 api_server.go:279] https://192.168.39.6:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 22:51:36.269804   54573 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 22:51:36.759179   54573 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0717 22:51:36.767717   54573 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I0717 22:51:36.783171   54573 api_server.go:141] control plane version: v1.27.3
	I0717 22:51:36.783277   54573 api_server.go:131] duration metric: took 5.653264463s to wait for apiserver health ...
	I0717 22:51:36.783299   54573 cni.go:84] Creating CNI manager for ""
	I0717 22:51:36.783320   54573 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:51:36.785787   54573 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 22:51:32.594699   54649 api_server.go:52] waiting for apiserver process to appear ...
	I0717 22:51:32.594791   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:33.112226   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:33.611860   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:34.112071   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:34.611354   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:35.111291   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:35.611869   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:35.637583   54649 api_server.go:72] duration metric: took 3.042882856s to wait for apiserver process to appear ...
	I0717 22:51:35.637607   54649 api_server.go:88] waiting for apiserver healthz status ...
	I0717 22:51:35.637624   54649 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8444/healthz ...
	I0717 22:51:36.787709   54573 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 22:51:36.808980   54573 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 22:51:36.862525   54573 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 22:51:36.878653   54573 system_pods.go:59] 8 kube-system pods found
	I0717 22:51:36.878761   54573 system_pods.go:61] "coredns-5d78c9869d-2mpst" [7516b57f-a4cb-4e2f-995e-8e063bed22ae] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 22:51:36.878788   54573 system_pods.go:61] "etcd-no-preload-935524" [b663c4f9-d98e-457d-b511-435bea5e9525] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 22:51:36.878827   54573 system_pods.go:61] "kube-apiserver-no-preload-935524" [fb6f55fc-8705-46aa-a23c-e7870e52e542] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 22:51:36.878852   54573 system_pods.go:61] "kube-controller-manager-no-preload-935524" [37d43d22-e857-4a9b-b0cb-a9fc39931baa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 22:51:36.878874   54573 system_pods.go:61] "kube-proxy-qhp66" [8bc95955-b7ba-41e3-ac67-604a9695f784] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 22:51:36.878913   54573 system_pods.go:61] "kube-scheduler-no-preload-935524" [86fef4e0-b156-421a-8d53-0d34cae2cdb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 22:51:36.878940   54573 system_pods.go:61] "metrics-server-74d5c6b9c-tlbpl" [7c478efe-4435-45dd-a688-745872fc2918] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:51:36.878959   54573 system_pods.go:61] "storage-provisioner" [85812d54-7a57-430b-991e-e301f123a86a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 22:51:36.878991   54573 system_pods.go:74] duration metric: took 16.439496ms to wait for pod list to return data ...
	I0717 22:51:36.879014   54573 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:51:36.886556   54573 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:51:36.886669   54573 node_conditions.go:123] node cpu capacity is 2
	I0717 22:51:36.886694   54573 node_conditions.go:105] duration metric: took 7.665172ms to run NodePressure ...
	I0717 22:51:36.886743   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:37.408758   54573 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 22:51:37.426705   54573 kubeadm.go:787] kubelet initialised
	I0717 22:51:37.426750   54573 kubeadm.go:788] duration metric: took 17.898411ms waiting for restarted kubelet to initialise ...
	I0717 22:51:37.426760   54573 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:51:37.442893   54573 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-2mpst" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:37.449989   54573 pod_ready.go:97] node "no-preload-935524" hosting pod "coredns-5d78c9869d-2mpst" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.450020   54573 pod_ready.go:81] duration metric: took 7.096248ms waiting for pod "coredns-5d78c9869d-2mpst" in "kube-system" namespace to be "Ready" ...
	E0717 22:51:37.450032   54573 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-935524" hosting pod "coredns-5d78c9869d-2mpst" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.450043   54573 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:37.460343   54573 pod_ready.go:97] node "no-preload-935524" hosting pod "etcd-no-preload-935524" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.460423   54573 pod_ready.go:81] duration metric: took 10.370601ms waiting for pod "etcd-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	E0717 22:51:37.460468   54573 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-935524" hosting pod "etcd-no-preload-935524" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.460481   54573 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:37.475124   54573 pod_ready.go:97] node "no-preload-935524" hosting pod "kube-apiserver-no-preload-935524" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.475203   54573 pod_ready.go:81] duration metric: took 14.713192ms waiting for pod "kube-apiserver-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	E0717 22:51:37.475224   54573 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-935524" hosting pod "kube-apiserver-no-preload-935524" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.475242   54573 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:37.486443   54573 pod_ready.go:97] node "no-preload-935524" hosting pod "kube-controller-manager-no-preload-935524" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.486529   54573 pod_ready.go:81] duration metric: took 11.253247ms waiting for pod "kube-controller-manager-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	E0717 22:51:37.486551   54573 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-935524" hosting pod "kube-controller-manager-no-preload-935524" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.486570   54573 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qhp66" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:34.367014   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetIP
	I0717 22:51:34.370717   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:34.371243   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:34.371272   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:34.371626   53870 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0717 22:51:34.380223   53870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:51:34.395496   53870 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0717 22:51:34.395564   53870 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:51:34.440412   53870 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0717 22:51:34.440486   53870 ssh_runner.go:195] Run: which lz4
	I0717 22:51:34.445702   53870 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 22:51:34.451213   53870 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 22:51:34.451259   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0717 22:51:36.330808   53870 crio.go:444] Took 1.885143 seconds to copy over tarball
	I0717 22:51:36.330866   53870 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 22:51:33.377108   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:35.379770   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:37.382141   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:37.819308   54573 pod_ready.go:97] node "no-preload-935524" hosting pod "kube-proxy-qhp66" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.819393   54573 pod_ready.go:81] duration metric: took 332.789076ms waiting for pod "kube-proxy-qhp66" in "kube-system" namespace to be "Ready" ...
	E0717 22:51:37.819414   54573 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-935524" hosting pod "kube-proxy-qhp66" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.819430   54573 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:38.213914   54573 pod_ready.go:97] node "no-preload-935524" hosting pod "kube-scheduler-no-preload-935524" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:38.213947   54573 pod_ready.go:81] duration metric: took 394.500573ms waiting for pod "kube-scheduler-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	E0717 22:51:38.213957   54573 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-935524" hosting pod "kube-scheduler-no-preload-935524" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:38.213967   54573 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:38.617826   54573 pod_ready.go:97] node "no-preload-935524" hosting pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:38.617855   54573 pod_ready.go:81] duration metric: took 403.88033ms waiting for pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace to be "Ready" ...
	E0717 22:51:38.617867   54573 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-935524" hosting pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:38.617878   54573 pod_ready.go:38] duration metric: took 1.191105641s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:51:38.617907   54573 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 22:51:38.634486   54573 ops.go:34] apiserver oom_adj: -16
	I0717 22:51:38.634511   54573 kubeadm.go:640] restartCluster took 21.94326064s
	I0717 22:51:38.634520   54573 kubeadm.go:406] StartCluster complete in 21.998122781s
	I0717 22:51:38.634560   54573 settings.go:142] acquiring lock: {Name:mk9be0d05c943f5010cb1ca9690e7c6a87f18950 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:51:38.634648   54573 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:51:38.637414   54573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/kubeconfig: {Name:mk1295c692902c2f497638c9fc1fd126fdb1a2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:51:38.637733   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 22:51:38.637868   54573 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 22:51:38.637955   54573 addons.go:69] Setting storage-provisioner=true in profile "no-preload-935524"
	I0717 22:51:38.637972   54573 addons.go:231] Setting addon storage-provisioner=true in "no-preload-935524"
	W0717 22:51:38.637986   54573 addons.go:240] addon storage-provisioner should already be in state true
	I0717 22:51:38.638036   54573 host.go:66] Checking if "no-preload-935524" exists ...
	I0717 22:51:38.638418   54573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:51:38.638441   54573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:51:38.638510   54573 addons.go:69] Setting default-storageclass=true in profile "no-preload-935524"
	I0717 22:51:38.638530   54573 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-935524"
	I0717 22:51:38.638684   54573 addons.go:69] Setting metrics-server=true in profile "no-preload-935524"
	I0717 22:51:38.638700   54573 addons.go:231] Setting addon metrics-server=true in "no-preload-935524"
	W0717 22:51:38.638707   54573 addons.go:240] addon metrics-server should already be in state true
	I0717 22:51:38.638751   54573 host.go:66] Checking if "no-preload-935524" exists ...
	I0717 22:51:38.638977   54573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:51:38.639016   54573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:51:38.639083   54573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:51:38.639106   54573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:51:38.644028   54573 config.go:182] Loaded profile config "no-preload-935524": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:51:38.656131   54573 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-935524" context rescaled to 1 replicas
	I0717 22:51:38.656182   54573 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 22:51:38.658128   54573 out.go:177] * Verifying Kubernetes components...
	I0717 22:51:38.659350   54573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45639
	I0717 22:51:38.662767   54573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:51:38.660678   54573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46603
	I0717 22:51:38.663403   54573 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:51:38.664191   54573 main.go:141] libmachine: Using API Version  1
	I0717 22:51:38.664207   54573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:51:38.664296   54573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46321
	I0717 22:51:38.664660   54573 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:51:38.664872   54573 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:51:38.665287   54573 main.go:141] libmachine: Using API Version  1
	I0717 22:51:38.665301   54573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:51:38.665363   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetState
	I0717 22:51:38.666826   54573 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:51:38.667345   54573 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:51:38.667411   54573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:51:38.667432   54573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:51:38.667875   54573 main.go:141] libmachine: Using API Version  1
	I0717 22:51:38.667888   54573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:51:38.669299   54573 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:51:38.669907   54573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:51:38.669941   54573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:51:38.689870   54573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I0717 22:51:38.690029   54573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43747
	I0717 22:51:38.690596   54573 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:51:38.691039   54573 main.go:141] libmachine: Using API Version  1
	I0717 22:51:38.691052   54573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:51:38.691354   54573 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:51:38.691782   54573 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:51:38.691932   54573 main.go:141] libmachine: Using API Version  1
	I0717 22:51:38.691942   54573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:51:38.692153   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetState
	I0717 22:51:38.692209   54573 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:51:38.692391   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetState
	I0717 22:51:38.693179   54573 addons.go:231] Setting addon default-storageclass=true in "no-preload-935524"
	W0717 22:51:38.693197   54573 addons.go:240] addon default-storageclass should already be in state true
	I0717 22:51:38.693226   54573 host.go:66] Checking if "no-preload-935524" exists ...
	I0717 22:51:38.693599   54573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:51:38.693627   54573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:51:38.695740   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:51:38.698283   54573 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 22:51:38.696822   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:51:38.700282   54573 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 22:51:38.700294   54573 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 22:51:38.700313   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:51:38.702588   54573 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:51:38.704435   54573 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:51:38.704453   54573 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 22:51:38.704470   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:51:38.704034   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:51:38.704509   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:51:38.704545   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:51:38.705314   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:51:38.705704   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:51:38.705962   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:51:38.706101   54573 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/no-preload-935524/id_rsa Username:docker}
	I0717 22:51:38.707998   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:51:38.708366   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:51:38.708391   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:51:38.708663   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:51:38.708827   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:51:38.708935   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:51:38.709039   54573 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/no-preload-935524/id_rsa Username:docker}
	I0717 22:51:38.715303   54573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42441
	I0717 22:51:38.715765   54573 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:51:38.716225   54573 main.go:141] libmachine: Using API Version  1
	I0717 22:51:38.716238   54573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:51:38.716515   54573 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:51:38.716900   54573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:51:38.716915   54573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:51:38.775381   54573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42195
	I0717 22:51:38.781850   54573 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:51:38.782856   54573 main.go:141] libmachine: Using API Version  1
	I0717 22:51:38.782886   54573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:51:38.783335   54573 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:51:38.783547   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetState
	I0717 22:51:38.786539   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:51:38.786818   54573 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 22:51:38.786841   54573 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 22:51:38.786860   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:51:38.789639   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:51:38.793649   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:51:38.793678   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:51:38.793701   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:51:38.793926   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:51:38.794106   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:51:38.794262   54573 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/no-preload-935524/id_rsa Username:docker}
	I0717 22:51:38.862651   54573 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 22:51:38.862675   54573 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 22:51:38.914260   54573 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 22:51:38.914294   54573 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 22:51:38.933208   54573 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:51:38.959784   54573 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 22:51:38.959817   54573 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 22:51:38.977205   54573 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 22:51:39.028067   54573 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 22:51:39.145640   54573 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0717 22:51:39.145688   54573 node_ready.go:35] waiting up to 6m0s for node "no-preload-935524" to be "Ready" ...
	I0717 22:51:40.593928   54573 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.616678929s)
	I0717 22:51:40.593974   54573 main.go:141] libmachine: Making call to close driver server
	I0717 22:51:40.593987   54573 main.go:141] libmachine: (no-preload-935524) Calling .Close
	I0717 22:51:40.594018   54573 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.660755961s)
	I0717 22:51:40.594062   54573 main.go:141] libmachine: Making call to close driver server
	I0717 22:51:40.594078   54573 main.go:141] libmachine: (no-preload-935524) Calling .Close
	I0717 22:51:40.594360   54573 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:51:40.594377   54573 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:51:40.594388   54573 main.go:141] libmachine: Making call to close driver server
	I0717 22:51:40.594397   54573 main.go:141] libmachine: (no-preload-935524) Calling .Close
	I0717 22:51:40.596155   54573 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:51:40.596173   54573 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:51:40.596184   54573 main.go:141] libmachine: Making call to close driver server
	I0717 22:51:40.596201   54573 main.go:141] libmachine: (no-preload-935524) Calling .Close
	I0717 22:51:40.596345   54573 main.go:141] libmachine: (no-preload-935524) DBG | Closing plugin on server side
	I0717 22:51:40.596378   54573 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:51:40.596393   54573 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:51:40.596406   54573 main.go:141] libmachine: Making call to close driver server
	I0717 22:51:40.596415   54573 main.go:141] libmachine: (no-preload-935524) Calling .Close
	I0717 22:51:40.596536   54573 main.go:141] libmachine: (no-preload-935524) DBG | Closing plugin on server side
	I0717 22:51:40.596579   54573 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:51:40.596597   54573 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:51:40.596672   54573 main.go:141] libmachine: (no-preload-935524) DBG | Closing plugin on server side
	I0717 22:51:40.596706   54573 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:51:40.596716   54573 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:51:40.766149   54573 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.73803779s)
	I0717 22:51:40.766218   54573 main.go:141] libmachine: Making call to close driver server
	I0717 22:51:40.766233   54573 main.go:141] libmachine: (no-preload-935524) Calling .Close
	I0717 22:51:40.766573   54573 main.go:141] libmachine: (no-preload-935524) DBG | Closing plugin on server side
	I0717 22:51:40.766619   54573 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:51:40.766629   54573 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:51:40.766639   54573 main.go:141] libmachine: Making call to close driver server
	I0717 22:51:40.766648   54573 main.go:141] libmachine: (no-preload-935524) Calling .Close
	I0717 22:51:40.766954   54573 main.go:141] libmachine: (no-preload-935524) DBG | Closing plugin on server side
	I0717 22:51:40.766987   54573 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:51:40.766996   54573 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:51:40.767004   54573 addons.go:467] Verifying addon metrics-server=true in "no-preload-935524"
	I0717 22:51:40.921642   54573 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 22:51:40.099354   54649 api_server.go:279] https://192.168.72.118:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 22:51:40.099395   54649 api_server.go:103] status: https://192.168.72.118:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 22:51:40.600101   54649 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8444/healthz ...
	I0717 22:51:40.606334   54649 api_server.go:279] https://192.168.72.118:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 22:51:40.606375   54649 api_server.go:103] status: https://192.168.72.118:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 22:51:41.100086   54649 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8444/healthz ...
	I0717 22:51:41.110410   54649 api_server.go:279] https://192.168.72.118:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 22:51:41.110443   54649 api_server.go:103] status: https://192.168.72.118:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 22:51:41.599684   54649 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8444/healthz ...
	I0717 22:51:41.615650   54649 api_server.go:279] https://192.168.72.118:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 22:51:41.615693   54649 api_server.go:103] status: https://192.168.72.118:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 22:51:42.100229   54649 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8444/healthz ...
	I0717 22:51:42.109347   54649 api_server.go:279] https://192.168.72.118:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 22:51:42.109400   54649 api_server.go:103] status: https://192.168.72.118:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 22:51:42.600180   54649 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8444/healthz ...
	I0717 22:51:42.607799   54649 api_server.go:279] https://192.168.72.118:8444/healthz returned 200:
	ok
	I0717 22:51:42.621454   54649 api_server.go:141] control plane version: v1.27.3
	I0717 22:51:42.621480   54649 api_server.go:131] duration metric: took 6.983866635s to wait for apiserver health ...
	I0717 22:51:42.621491   54649 cni.go:84] Creating CNI manager for ""
	I0717 22:51:42.621503   54649 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:51:42.623222   54649 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 22:51:41.140227   54573 addons.go:502] enable addons completed in 2.502347716s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 22:51:41.154857   54573 node_ready.go:58] node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:40.037161   53870 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.706262393s)
	I0717 22:51:40.037203   53870 crio.go:451] Took 3.706370 seconds to extract the tarball
	I0717 22:51:40.037215   53870 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 22:51:40.089356   53870 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:51:40.143494   53870 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0717 22:51:40.143520   53870 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 22:51:40.143582   53870 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:51:40.143803   53870 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0717 22:51:40.143819   53870 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 22:51:40.143889   53870 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0717 22:51:40.143972   53870 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 22:51:40.143979   53870 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0717 22:51:40.144036   53870 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 22:51:40.144084   53870 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 22:51:40.151367   53870 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:51:40.151467   53870 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0717 22:51:40.152588   53870 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 22:51:40.152741   53870 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0717 22:51:40.152887   53870 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0717 22:51:40.152985   53870 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 22:51:40.153357   53870 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 22:51:40.153384   53870 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 22:51:40.317883   53870 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0717 22:51:40.322240   53870 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0717 22:51:40.325883   53870 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0717 22:51:40.325883   53870 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0717 22:51:40.326725   53870 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0717 22:51:40.328193   53870 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0717 22:51:40.356171   53870 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 22:51:40.485259   53870 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:51:40.493227   53870 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0717 22:51:40.493266   53870 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0717 22:51:40.493304   53870 ssh_runner.go:195] Run: which crictl
	I0717 22:51:40.514366   53870 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0717 22:51:40.514409   53870 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 22:51:40.514459   53870 ssh_runner.go:195] Run: which crictl
	I0717 22:51:40.578201   53870 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0717 22:51:40.578304   53870 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 22:51:40.578312   53870 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0717 22:51:40.578342   53870 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 22:51:40.578363   53870 ssh_runner.go:195] Run: which crictl
	I0717 22:51:40.578396   53870 ssh_runner.go:195] Run: which crictl
	I0717 22:51:40.578451   53870 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0717 22:51:40.578485   53870 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 22:51:40.578534   53870 ssh_runner.go:195] Run: which crictl
	I0717 22:51:40.578248   53870 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0717 22:51:40.578638   53870 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0717 22:51:40.578247   53870 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0717 22:51:40.578717   53870 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0717 22:51:40.578756   53870 ssh_runner.go:195] Run: which crictl
	I0717 22:51:40.578688   53870 ssh_runner.go:195] Run: which crictl
	I0717 22:51:40.717404   53870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0717 22:51:40.717482   53870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0717 22:51:40.717627   53870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0717 22:51:40.717740   53870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0717 22:51:40.717814   53870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0717 22:51:40.717918   53870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0717 22:51:40.718015   53870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 22:51:40.856246   53870 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0717 22:51:40.856291   53870 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0717 22:51:40.856403   53870 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0717 22:51:40.856438   53870 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0717 22:51:40.856526   53870 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0717 22:51:40.856575   53870 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0717 22:51:40.856604   53870 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0717 22:51:40.856653   53870 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I0717 22:51:40.861702   53870 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0717 22:51:40.861718   53870 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0717 22:51:40.861766   53870 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0717 22:51:42.019439   53870 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.157649631s)
	I0717 22:51:42.019471   53870 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0717 22:51:42.019512   53870 cache_images.go:92] LoadImages completed in 1.875976905s
	W0717 22:51:42.019588   53870 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
	I0717 22:51:42.019667   53870 ssh_runner.go:195] Run: crio config
	I0717 22:51:42.084276   53870 cni.go:84] Creating CNI manager for ""
	I0717 22:51:42.084310   53870 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:51:42.084329   53870 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 22:51:42.084352   53870 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.149 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-332820 NodeName:old-k8s-version-332820 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.149"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.149 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 22:51:42.084534   53870 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.149
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-332820"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.149
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.149"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-332820
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.149:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 22:51:42.084631   53870 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-332820 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-332820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 22:51:42.084705   53870 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0717 22:51:42.095493   53870 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 22:51:42.095576   53870 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 22:51:42.106777   53870 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0717 22:51:42.126860   53870 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 22:51:42.146610   53870 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0717 22:51:42.167959   53870 ssh_runner.go:195] Run: grep 192.168.50.149	control-plane.minikube.internal$ /etc/hosts
	I0717 22:51:42.171993   53870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.149	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:51:42.188635   53870 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820 for IP: 192.168.50.149
	I0717 22:51:42.188673   53870 certs.go:190] acquiring lock for shared ca certs: {Name:mk358cdd8ffcf2f8ada4337c2bb687d932ef5afe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:51:42.188887   53870 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key
	I0717 22:51:42.188945   53870 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key
	I0717 22:51:42.189042   53870 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/client.key
	I0717 22:51:42.189125   53870 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/apiserver.key.7e281e16
	I0717 22:51:42.189177   53870 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/proxy-client.key
	I0717 22:51:42.189322   53870 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem (1338 bytes)
	W0717 22:51:42.189362   53870 certs.go:433] ignoring /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990_empty.pem, impossibly tiny 0 bytes
	I0717 22:51:42.189377   53870 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 22:51:42.189413   53870 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem (1078 bytes)
	I0717 22:51:42.189456   53870 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem (1123 bytes)
	I0717 22:51:42.189502   53870 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem (1675 bytes)
	I0717 22:51:42.189590   53870 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:51:42.190495   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 22:51:42.219201   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 22:51:42.248355   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 22:51:42.275885   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 22:51:42.303987   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 22:51:42.329331   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 22:51:42.354424   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 22:51:42.386422   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 22:51:42.418872   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /usr/share/ca-certificates/229902.pem (1708 bytes)
	I0717 22:51:42.448869   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 22:51:42.473306   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem --> /usr/share/ca-certificates/22990.pem (1338 bytes)
	I0717 22:51:42.499302   53870 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 22:51:42.519833   53870 ssh_runner.go:195] Run: openssl version
	I0717 22:51:42.525933   53870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/229902.pem && ln -fs /usr/share/ca-certificates/229902.pem /etc/ssl/certs/229902.pem"
	I0717 22:51:42.537165   53870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/229902.pem
	I0717 22:51:42.545354   53870 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 21:49 /usr/share/ca-certificates/229902.pem
	I0717 22:51:42.545419   53870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/229902.pem
	I0717 22:51:42.551786   53870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/229902.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 22:51:42.561900   53870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 22:51:42.571880   53870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:51:42.576953   53870 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:51:42.577017   53870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:51:42.583311   53870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 22:51:42.593618   53870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22990.pem && ln -fs /usr/share/ca-certificates/22990.pem /etc/ssl/certs/22990.pem"
	I0717 22:51:42.604326   53870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22990.pem
	I0717 22:51:42.610022   53870 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 21:49 /usr/share/ca-certificates/22990.pem
	I0717 22:51:42.610084   53870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22990.pem
	I0717 22:51:42.615999   53870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22990.pem /etc/ssl/certs/51391683.0"
	I0717 22:51:42.627353   53870 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 22:51:42.632186   53870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 22:51:42.638738   53870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 22:51:42.645118   53870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 22:51:42.651619   53870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 22:51:42.658542   53870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 22:51:42.665449   53870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 22:51:42.673656   53870 kubeadm.go:404] StartCluster: {Name:old-k8s-version-332820 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-332820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.149 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:51:42.673776   53870 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 22:51:42.673832   53870 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:51:42.718032   53870 cri.go:89] found id: ""
	I0717 22:51:42.718127   53870 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 22:51:42.731832   53870 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 22:51:42.731856   53870 kubeadm.go:636] restartCluster start
	I0717 22:51:42.731907   53870 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 22:51:42.741531   53870 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:42.743035   53870 kubeconfig.go:92] found "old-k8s-version-332820" server: "https://192.168.50.149:8443"
	I0717 22:51:42.746440   53870 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 22:51:42.755816   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:42.755878   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:42.768767   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:39.384892   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:41.876361   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:42.624643   54649 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 22:51:42.660905   54649 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 22:51:42.733831   54649 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 22:51:42.761055   54649 system_pods.go:59] 8 kube-system pods found
	I0717 22:51:42.761093   54649 system_pods.go:61] "coredns-5d78c9869d-wpmhl" [ebfdf1a8-16b1-4e11-8bda-0b6afa127ed2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 22:51:42.761113   54649 system_pods.go:61] "etcd-default-k8s-diff-port-504828" [47338c6f-2509-4051-acaa-7281bbafe376] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 22:51:42.761125   54649 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-504828" [16961d82-f852-4c99-81af-a5b6290222d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 22:51:42.761138   54649 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-504828" [9e226305-9f41-4e56-8f8d-a250f46ab852] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 22:51:42.761165   54649 system_pods.go:61] "kube-proxy-kbp9x" [5a581d9c-4efa-49b7-8bd9-b877d5d12871] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 22:51:42.761183   54649 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-504828" [0d63a508-5b2b-4b61-b087-afdd063afbfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 22:51:42.761197   54649 system_pods.go:61] "metrics-server-74d5c6b9c-tj4st" [2cd90033-b07a-4458-8dac-5a618d4ed7ce] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:51:42.761207   54649 system_pods.go:61] "storage-provisioner" [c306122c-f32a-4455-a825-3e272a114ddc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 22:51:42.761217   54649 system_pods.go:74] duration metric: took 27.36753ms to wait for pod list to return data ...
	I0717 22:51:42.761226   54649 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:51:42.766615   54649 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:51:42.766640   54649 node_conditions.go:123] node cpu capacity is 2
	I0717 22:51:42.766651   54649 node_conditions.go:105] duration metric: took 5.41582ms to run NodePressure ...
	I0717 22:51:42.766666   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:43.144614   54649 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 22:51:43.151192   54649 kubeadm.go:787] kubelet initialised
	I0717 22:51:43.151229   54649 kubeadm.go:788] duration metric: took 6.579448ms waiting for restarted kubelet to initialise ...
	I0717 22:51:43.151245   54649 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:51:43.157867   54649 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-wpmhl" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:45.174145   54649 pod_ready.go:102] pod "coredns-5d78c9869d-wpmhl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:47.177320   54649 pod_ready.go:102] pod "coredns-5d78c9869d-wpmhl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:43.656678   54573 node_ready.go:58] node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:46.154037   54573 node_ready.go:49] node "no-preload-935524" has status "Ready":"True"
	I0717 22:51:46.154060   54573 node_ready.go:38] duration metric: took 7.008304923s waiting for node "no-preload-935524" to be "Ready" ...
	I0717 22:51:46.154068   54573 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:51:46.161581   54573 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-2mpst" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:46.167554   54573 pod_ready.go:92] pod "coredns-5d78c9869d-2mpst" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:46.167581   54573 pod_ready.go:81] duration metric: took 5.973951ms waiting for pod "coredns-5d78c9869d-2mpst" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:46.167593   54573 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:43.269246   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:43.269363   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:43.281553   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:43.769539   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:43.769648   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:43.784373   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:44.268932   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:44.269030   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:44.280678   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:44.769180   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:44.769268   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:44.782107   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:45.269718   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:45.269795   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:45.282616   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:45.768937   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:45.769014   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:45.782121   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:46.269531   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:46.269628   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:46.281901   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:46.769344   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:46.769437   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:46.784477   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:47.268980   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:47.269070   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:47.280858   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:47.769478   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:47.769577   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:47.783095   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:44.373907   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:46.375240   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:49.671705   54649 pod_ready.go:102] pod "coredns-5d78c9869d-wpmhl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:50.172053   54649 pod_ready.go:92] pod "coredns-5d78c9869d-wpmhl" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:50.172081   54649 pod_ready.go:81] duration metric: took 7.014190645s waiting for pod "coredns-5d78c9869d-wpmhl" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:50.172094   54649 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:52.186327   54649 pod_ready.go:102] pod "etcd-default-k8s-diff-port-504828" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:48.180621   54573 pod_ready.go:92] pod "etcd-no-preload-935524" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:48.180653   54573 pod_ready.go:81] duration metric: took 2.0130508s waiting for pod "etcd-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:48.180666   54573 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:48.185965   54573 pod_ready.go:92] pod "kube-apiserver-no-preload-935524" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:48.185985   54573 pod_ready.go:81] duration metric: took 5.310471ms waiting for pod "kube-apiserver-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:48.185996   54573 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:48.191314   54573 pod_ready.go:92] pod "kube-controller-manager-no-preload-935524" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:48.191335   54573 pod_ready.go:81] duration metric: took 5.331248ms waiting for pod "kube-controller-manager-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:48.191346   54573 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qhp66" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:48.197557   54573 pod_ready.go:92] pod "kube-proxy-qhp66" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:48.197576   54573 pod_ready.go:81] duration metric: took 6.222911ms waiting for pod "kube-proxy-qhp66" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:48.197586   54573 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:50.567470   54573 pod_ready.go:92] pod "kube-scheduler-no-preload-935524" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:50.567494   54573 pod_ready.go:81] duration metric: took 2.369900836s waiting for pod "kube-scheduler-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:50.567504   54573 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:52.582697   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:48.269386   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:48.269464   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:48.281178   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:48.769171   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:48.769255   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:48.781163   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:49.269813   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:49.269890   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:49.282099   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:49.769555   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:49.769659   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:49.782298   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:50.269111   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:50.269176   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:50.280805   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:50.769333   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:50.769438   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:50.781760   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:51.269299   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:51.269368   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:51.281559   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:51.769032   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:51.769096   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:51.780505   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:52.269033   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:52.269134   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:52.281362   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:52.755841   53870 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 22:51:52.755871   53870 kubeadm.go:1128] stopping kube-system containers ...
	I0717 22:51:52.755882   53870 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 22:51:52.755945   53870 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:51:52.789292   53870 cri.go:89] found id: ""
	I0717 22:51:52.789370   53870 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 22:51:52.805317   53870 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 22:51:52.814714   53870 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:51:52.814778   53870 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 22:51:52.824024   53870 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 22:51:52.824045   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:48.376709   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:50.877922   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:54.187055   54649 pod_ready.go:92] pod "etcd-default-k8s-diff-port-504828" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:54.187076   54649 pod_ready.go:81] duration metric: took 4.01497478s waiting for pod "etcd-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:54.187084   54649 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:54.195396   54649 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-504828" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:54.195426   54649 pod_ready.go:81] duration metric: took 8.33448ms waiting for pod "kube-apiserver-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:54.195440   54649 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:54.205666   54649 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-504828" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:54.205694   54649 pod_ready.go:81] duration metric: took 10.243213ms waiting for pod "kube-controller-manager-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:54.205713   54649 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kbp9x" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:54.217007   54649 pod_ready.go:92] pod "kube-proxy-kbp9x" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:54.217030   54649 pod_ready.go:81] duration metric: took 11.309771ms waiting for pod "kube-proxy-kbp9x" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:54.217041   54649 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:54.225509   54649 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-504828" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:54.225558   54649 pod_ready.go:81] duration metric: took 8.507279ms waiting for pod "kube-scheduler-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:54.225572   54649 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:56.592993   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:54.582860   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:56.583634   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:52.949663   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:53.985430   53870 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.035733754s)
	I0717 22:51:53.985459   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:54.222833   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:54.357196   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:54.468442   53870 api_server.go:52] waiting for apiserver process to appear ...
	I0717 22:51:54.468516   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:54.999095   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:55.499700   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:55.999447   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:56.051829   53870 api_server.go:72] duration metric: took 1.583387644s to wait for apiserver process to appear ...
	I0717 22:51:56.051856   53870 api_server.go:88] waiting for apiserver healthz status ...
	I0717 22:51:56.051872   53870 api_server.go:253] Checking apiserver healthz at https://192.168.50.149:8443/healthz ...
	I0717 22:51:53.374486   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:55.375033   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:57.376561   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:59.093181   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:01.592585   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:59.084169   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:01.583540   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:01.053643   53870 api_server.go:269] stopped: https://192.168.50.149:8443/healthz: Get "https://192.168.50.149:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 22:52:01.554418   53870 api_server.go:253] Checking apiserver healthz at https://192.168.50.149:8443/healthz ...
	I0717 22:52:01.627371   53870 api_server.go:279] https://192.168.50.149:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 22:52:01.627400   53870 api_server.go:103] status: https://192.168.50.149:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 22:52:02.054761   53870 api_server.go:253] Checking apiserver healthz at https://192.168.50.149:8443/healthz ...
	I0717 22:52:02.060403   53870 api_server.go:279] https://192.168.50.149:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0717 22:52:02.060431   53870 api_server.go:103] status: https://192.168.50.149:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0717 22:52:02.554085   53870 api_server.go:253] Checking apiserver healthz at https://192.168.50.149:8443/healthz ...
	I0717 22:52:02.561664   53870 api_server.go:279] https://192.168.50.149:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0717 22:52:02.561699   53870 api_server.go:103] status: https://192.168.50.149:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0717 22:51:59.876307   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:02.374698   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:03.054028   53870 api_server.go:253] Checking apiserver healthz at https://192.168.50.149:8443/healthz ...
	I0717 22:52:03.061055   53870 api_server.go:279] https://192.168.50.149:8443/healthz returned 200:
	ok
	I0717 22:52:03.069434   53870 api_server.go:141] control plane version: v1.16.0
	I0717 22:52:03.069465   53870 api_server.go:131] duration metric: took 7.017602055s to wait for apiserver health ...
	I0717 22:52:03.069475   53870 cni.go:84] Creating CNI manager for ""
	I0717 22:52:03.069485   53870 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:52:03.071306   53870 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 22:52:04.092490   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:06.592435   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:04.082787   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:06.089097   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:03.073009   53870 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 22:52:03.085399   53870 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 22:52:03.106415   53870 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 22:52:03.117136   53870 system_pods.go:59] 7 kube-system pods found
	I0717 22:52:03.117181   53870 system_pods.go:61] "coredns-5644d7b6d9-s9vtg" [7a1ccabb-ad03-47ef-804a-eff0b00ea65c] Running
	I0717 22:52:03.117191   53870 system_pods.go:61] "etcd-old-k8s-version-332820" [a1c2ef8d-fdb3-4394-944b-042870d25c4b] Running
	I0717 22:52:03.117198   53870 system_pods.go:61] "kube-apiserver-old-k8s-version-332820" [39a09f85-abd5-442a-887d-c04a91b87258] Running
	I0717 22:52:03.117206   53870 system_pods.go:61] "kube-controller-manager-old-k8s-version-332820" [94c599c4-d22c-4b5e-bf7b-ce0b81e21283] Running
	I0717 22:52:03.117212   53870 system_pods.go:61] "kube-proxy-vkjpn" [8fe8844c-f199-4bcb-b6a0-c6023c06ef75] Running
	I0717 22:52:03.117219   53870 system_pods.go:61] "kube-scheduler-old-k8s-version-332820" [a2102927-3de6-45d8-a37e-665adde8ca47] Running
	I0717 22:52:03.117227   53870 system_pods.go:61] "storage-provisioner" [b9bcb25d-294e-49ae-8650-98b1c7e5b4f8] Running
	I0717 22:52:03.117234   53870 system_pods.go:74] duration metric: took 10.793064ms to wait for pod list to return data ...
	I0717 22:52:03.117247   53870 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:52:03.122227   53870 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:52:03.122275   53870 node_conditions.go:123] node cpu capacity is 2
	I0717 22:52:03.122294   53870 node_conditions.go:105] duration metric: took 5.039156ms to run NodePressure ...
	I0717 22:52:03.122322   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:52:03.337823   53870 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 22:52:03.342104   53870 retry.go:31] will retry after 190.852011ms: kubelet not initialised
	I0717 22:52:03.537705   53870 retry.go:31] will retry after 190.447443ms: kubelet not initialised
	I0717 22:52:03.735450   53870 retry.go:31] will retry after 294.278727ms: kubelet not initialised
	I0717 22:52:04.034965   53870 retry.go:31] will retry after 808.339075ms: kubelet not initialised
	I0717 22:52:04.847799   53870 retry.go:31] will retry after 1.685522396s: kubelet not initialised
	I0717 22:52:06.537765   53870 retry.go:31] will retry after 1.595238483s: kubelet not initialised
	I0717 22:52:04.377461   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:06.876135   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:09.090739   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:11.093234   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:08.583118   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:11.083446   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:08.139297   53870 retry.go:31] will retry after 4.170190829s: kubelet not initialised
	I0717 22:52:12.317346   53870 retry.go:31] will retry after 5.652204651s: kubelet not initialised
	I0717 22:52:09.374610   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:11.375332   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:13.590999   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:15.591041   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:13.583868   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:16.081948   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:13.376027   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:15.874857   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:17.876130   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:17.593544   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:20.092121   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:18.082068   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:20.083496   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:22.582358   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:17.975640   53870 retry.go:31] will retry after 6.695949238s: kubelet not initialised
	I0717 22:52:20.375494   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:22.882209   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:22.591705   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:25.090965   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:25.082268   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:27.582422   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:24.676746   53870 retry.go:31] will retry after 10.942784794s: kubelet not initialised
	I0717 22:52:25.374526   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:27.375728   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:27.591516   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:30.091872   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:30.081334   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:32.082535   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:29.874508   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:31.876648   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:32.592067   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:35.092067   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:34.082954   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:36.585649   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:35.625671   53870 retry.go:31] will retry after 20.23050626s: kubelet not initialised
	I0717 22:52:34.376118   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:36.875654   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:37.592201   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:40.091539   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:39.081430   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:41.082360   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:39.374867   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:41.375759   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:42.590417   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:44.591742   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:46.593256   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:43.083211   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:45.084404   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:47.085099   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:43.376030   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:45.873482   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:47.875479   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:49.092376   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:51.592430   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:49.582087   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:52.083003   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:49.878981   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:52.374685   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:54.090617   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:56.091597   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:54.583455   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:57.081342   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:55.864261   53870 kubeadm.go:787] kubelet initialised
	I0717 22:52:55.864281   53870 kubeadm.go:788] duration metric: took 52.526433839s waiting for restarted kubelet to initialise ...
	I0717 22:52:55.864287   53870 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:52:55.870685   53870 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-s9vtg" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:55.877709   53870 pod_ready.go:92] pod "coredns-5644d7b6d9-s9vtg" in "kube-system" namespace has status "Ready":"True"
	I0717 22:52:55.877737   53870 pod_ready.go:81] duration metric: took 7.026411ms waiting for pod "coredns-5644d7b6d9-s9vtg" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:55.877750   53870 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-vnldz" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:55.883932   53870 pod_ready.go:92] pod "coredns-5644d7b6d9-vnldz" in "kube-system" namespace has status "Ready":"True"
	I0717 22:52:55.883961   53870 pod_ready.go:81] duration metric: took 6.200731ms waiting for pod "coredns-5644d7b6d9-vnldz" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:55.883974   53870 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-332820" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:55.889729   53870 pod_ready.go:92] pod "etcd-old-k8s-version-332820" in "kube-system" namespace has status "Ready":"True"
	I0717 22:52:55.889749   53870 pod_ready.go:81] duration metric: took 5.767797ms waiting for pod "etcd-old-k8s-version-332820" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:55.889757   53870 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-332820" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:55.895286   53870 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-332820" in "kube-system" namespace has status "Ready":"True"
	I0717 22:52:55.895308   53870 pod_ready.go:81] duration metric: took 5.545198ms waiting for pod "kube-apiserver-old-k8s-version-332820" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:55.895316   53870 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-332820" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:56.263125   53870 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-332820" in "kube-system" namespace has status "Ready":"True"
	I0717 22:52:56.263153   53870 pod_ready.go:81] duration metric: took 367.829768ms waiting for pod "kube-controller-manager-old-k8s-version-332820" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:56.263166   53870 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vkjpn" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:56.663235   53870 pod_ready.go:92] pod "kube-proxy-vkjpn" in "kube-system" namespace has status "Ready":"True"
	I0717 22:52:56.663262   53870 pod_ready.go:81] duration metric: took 400.086969ms waiting for pod "kube-proxy-vkjpn" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:56.663276   53870 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-332820" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:57.061892   53870 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-332820" in "kube-system" namespace has status "Ready":"True"
	I0717 22:52:57.061917   53870 pod_ready.go:81] duration metric: took 398.633591ms waiting for pod "kube-scheduler-old-k8s-version-332820" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:57.061930   53870 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:54.374907   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:56.875242   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:58.092082   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:00.590626   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:59.081826   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:01.086158   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:59.469353   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:01.968383   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:59.374420   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:01.374640   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:02.595710   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:05.094211   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:03.582006   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:05.582348   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:07.582585   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:03.969801   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:06.469220   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:03.374665   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:05.375182   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:07.874673   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:07.593189   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:10.091253   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:10.083277   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:12.581195   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:08.973101   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:11.471187   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:10.375255   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:12.875038   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:12.593192   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:15.090204   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:17.091416   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:14.581962   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:17.082092   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:13.970246   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:16.469918   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:15.374678   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:17.375402   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:19.592518   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:22.090462   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:19.582582   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:21.582788   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:18.969975   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:21.471221   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:19.876416   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:22.377064   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:24.592012   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:26.593013   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:24.082409   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:26.581889   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:23.967680   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:25.969061   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:24.876092   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:26.876727   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:29.090937   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:31.092276   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:28.583371   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:30.588656   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:28.470667   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:30.969719   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:29.374066   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:31.375107   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:33.590361   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:35.591199   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:33.082794   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:35.583369   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:33.468669   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:35.468917   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:37.469656   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:33.873830   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:35.875551   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:38.091032   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:40.095610   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:38.083632   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:40.584069   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:39.970389   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:41.972121   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:38.374344   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:40.375117   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:42.873817   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:42.591348   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:44.591801   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:47.091463   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:43.092800   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:45.583147   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:44.468092   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:46.968583   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:44.875165   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:46.875468   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:49.592016   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:52.092191   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:48.082358   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:50.581430   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:52.581722   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:48.970562   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:51.469666   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:49.374655   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:51.374912   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:54.590857   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:57.090986   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:54.581979   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:57.081602   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:53.969845   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:56.470092   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:53.874630   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:56.374076   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:59.093019   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:01.590296   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:59.581481   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:02.081651   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:58.969243   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:00.969793   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:58.874500   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:00.875485   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:03.591663   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:06.091377   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:04.082661   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:06.581409   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:02.969900   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:05.469513   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:07.469630   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:03.374576   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:05.874492   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:07.876025   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:08.092299   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:10.591576   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:08.582962   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:11.081623   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:09.469674   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:11.970568   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:09.878298   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:12.375542   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:13.089815   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:15.091295   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:13.082485   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:15.582545   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:14.469264   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:16.970184   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:14.876188   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:17.375197   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:17.590457   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:19.590668   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:21.592281   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:18.082882   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:20.581232   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:22.581451   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:19.470007   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:21.972545   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:19.874905   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:21.876111   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:24.090912   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:26.091423   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:24.582104   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:27.082466   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:24.468612   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:26.468733   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:24.375195   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:26.375302   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:28.092426   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:30.590750   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:29.083200   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:31.581109   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:28.469411   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:30.474485   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:28.376063   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:30.874877   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:32.875720   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:32.591688   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:34.592382   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:37.091435   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:33.582072   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:35.582710   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:32.968863   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:34.969408   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:37.469461   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:35.375657   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:37.873420   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:39.091786   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:41.591723   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:38.082103   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:40.582480   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:39.470591   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:41.969425   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:39.876026   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:41.876450   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:44.090732   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:46.091209   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:43.082746   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:45.580745   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:47.581165   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:44.469624   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:46.469853   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:44.375526   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:46.874381   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:48.091542   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:50.591973   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:49.583795   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:52.084521   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:48.969202   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:50.969996   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:48.874772   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:50.876953   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:53.092284   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:55.591945   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:54.582260   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:56.582456   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:53.468921   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:55.469467   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:57.469588   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:53.375369   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:55.375834   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:57.875412   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:58.092340   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:00.593507   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:58.582790   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:01.082714   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:59.968899   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:01.970513   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:59.876100   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:02.377093   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:02.594240   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:05.091858   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:03.584934   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:06.082560   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:04.469605   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:06.470074   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:04.874495   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:06.874619   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:07.591151   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:09.594253   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:12.092136   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:08.082731   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:10.594934   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:08.970358   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:10.972021   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:08.875055   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:10.875177   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:11.360474   54248 pod_ready.go:81] duration metric: took 4m0.00020957s waiting for pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace to be "Ready" ...
	E0717 22:55:11.360506   54248 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 22:55:11.360523   54248 pod_ready.go:38] duration metric: took 4m12.083431067s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:55:11.360549   54248 kubeadm.go:640] restartCluster took 4m32.267522493s
	W0717 22:55:11.360621   54248 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0717 22:55:11.360653   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 22:55:14.094015   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:16.590201   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:13.082448   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:15.581674   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:17.582135   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:13.471096   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:15.970057   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:18.591981   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:21.091787   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:19.584462   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:22.082310   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:18.469828   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:20.970377   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:23.092278   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:25.594454   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:24.583377   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:27.082479   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:23.470427   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:25.473350   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:28.091878   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:30.092032   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:29.582576   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:31.584147   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:27.969045   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:30.468478   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:32.469942   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:32.591274   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:34.591477   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:37.089772   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:34.082460   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:36.082687   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:34.470431   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:36.470791   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:39.091253   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:41.091286   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:38.082836   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:40.581494   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:42.583634   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:38.969011   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:40.969922   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:43.092434   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:45.591302   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:45.083869   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:47.582454   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:43.468968   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:45.469340   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:47.471805   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:43.113858   54248 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.753186356s)
	I0717 22:55:43.113920   54248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:55:43.128803   54248 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 22:55:43.138891   54248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 22:55:43.148155   54248 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:55:43.148209   54248 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 22:55:43.357368   54248 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 22:55:47.591967   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:50.092046   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:52.092670   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:50.081152   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:50.568456   54573 pod_ready.go:81] duration metric: took 4m0.000934324s waiting for pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace to be "Ready" ...
	E0717 22:55:50.568492   54573 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 22:55:50.568506   54573 pod_ready.go:38] duration metric: took 4m4.414427298s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:55:50.568531   54573 api_server.go:52] waiting for apiserver process to appear ...
	I0717 22:55:50.568581   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 22:55:50.568650   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 22:55:50.622016   54573 cri.go:89] found id: "c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f"
	I0717 22:55:50.622048   54573 cri.go:89] found id: ""
	I0717 22:55:50.622058   54573 logs.go:284] 1 containers: [c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f]
	I0717 22:55:50.622114   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:50.627001   54573 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 22:55:50.627065   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 22:55:50.665053   54573 cri.go:89] found id: "98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea"
	I0717 22:55:50.665073   54573 cri.go:89] found id: ""
	I0717 22:55:50.665082   54573 logs.go:284] 1 containers: [98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea]
	I0717 22:55:50.665143   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:50.670198   54573 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 22:55:50.670261   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 22:55:50.705569   54573 cri.go:89] found id: "acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266"
	I0717 22:55:50.705595   54573 cri.go:89] found id: ""
	I0717 22:55:50.705604   54573 logs.go:284] 1 containers: [acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266]
	I0717 22:55:50.705669   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:50.710494   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 22:55:50.710569   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 22:55:50.772743   54573 cri.go:89] found id: "692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629"
	I0717 22:55:50.772768   54573 cri.go:89] found id: ""
	I0717 22:55:50.772776   54573 logs.go:284] 1 containers: [692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629]
	I0717 22:55:50.772831   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:50.777741   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 22:55:50.777813   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 22:55:50.809864   54573 cri.go:89] found id: "9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567"
	I0717 22:55:50.809892   54573 cri.go:89] found id: ""
	I0717 22:55:50.809903   54573 logs.go:284] 1 containers: [9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567]
	I0717 22:55:50.809963   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:50.814586   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 22:55:50.814654   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 22:55:50.850021   54573 cri.go:89] found id: "f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c"
	I0717 22:55:50.850047   54573 cri.go:89] found id: ""
	I0717 22:55:50.850056   54573 logs.go:284] 1 containers: [f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c]
	I0717 22:55:50.850125   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:50.854615   54573 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 22:55:50.854685   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 22:55:50.893272   54573 cri.go:89] found id: ""
	I0717 22:55:50.893300   54573 logs.go:284] 0 containers: []
	W0717 22:55:50.893310   54573 logs.go:286] No container was found matching "kindnet"
	I0717 22:55:50.893318   54573 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 22:55:50.893377   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 22:55:50.926652   54573 cri.go:89] found id: "a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b"
	I0717 22:55:50.926676   54573 cri.go:89] found id: "4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6"
	I0717 22:55:50.926682   54573 cri.go:89] found id: ""
	I0717 22:55:50.926690   54573 logs.go:284] 2 containers: [a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b 4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6]
	I0717 22:55:50.926747   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:50.931220   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:50.935745   54573 logs.go:123] Gathering logs for kubelet ...
	I0717 22:55:50.935772   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 22:55:51.002727   54573 logs.go:123] Gathering logs for kube-scheduler [692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629] ...
	I0717 22:55:51.002760   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629"
	I0717 22:55:51.046774   54573 logs.go:123] Gathering logs for kube-proxy [9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567] ...
	I0717 22:55:51.046811   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567"
	I0717 22:55:51.081441   54573 logs.go:123] Gathering logs for storage-provisioner [a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b] ...
	I0717 22:55:51.081472   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b"
	I0717 22:55:51.119354   54573 logs.go:123] Gathering logs for CRI-O ...
	I0717 22:55:51.119394   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 22:55:51.710591   54573 logs.go:123] Gathering logs for kube-controller-manager [f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c] ...
	I0717 22:55:51.710634   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c"
	I0717 22:55:51.758647   54573 logs.go:123] Gathering logs for storage-provisioner [4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6] ...
	I0717 22:55:51.758679   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6"
	I0717 22:55:51.792417   54573 logs.go:123] Gathering logs for container status ...
	I0717 22:55:51.792458   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 22:55:51.836268   54573 logs.go:123] Gathering logs for dmesg ...
	I0717 22:55:51.836302   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 22:55:51.852009   54573 logs.go:123] Gathering logs for describe nodes ...
	I0717 22:55:51.852038   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 22:55:52.018156   54573 logs.go:123] Gathering logs for kube-apiserver [c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f] ...
	I0717 22:55:52.018191   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f"
	I0717 22:55:52.061680   54573 logs.go:123] Gathering logs for etcd [98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea] ...
	I0717 22:55:52.061723   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea"
	I0717 22:55:52.105407   54573 logs.go:123] Gathering logs for coredns [acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266] ...
	I0717 22:55:52.105437   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266"
	I0717 22:55:49.969074   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:51.969157   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:54.934299   54248 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0717 22:55:54.934395   54248 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 22:55:54.934498   54248 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 22:55:54.934616   54248 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 22:55:54.934741   54248 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 22:55:54.934823   54248 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 22:55:54.936386   54248 out.go:204]   - Generating certificates and keys ...
	I0717 22:55:54.936475   54248 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 22:55:54.936548   54248 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 22:55:54.936643   54248 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 22:55:54.936719   54248 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0717 22:55:54.936803   54248 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 22:55:54.936871   54248 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0717 22:55:54.936947   54248 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0717 22:55:54.937023   54248 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0717 22:55:54.937125   54248 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 22:55:54.937219   54248 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 22:55:54.937269   54248 kubeadm.go:322] [certs] Using the existing "sa" key
	I0717 22:55:54.937333   54248 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 22:55:54.937395   54248 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 22:55:54.937460   54248 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 22:55:54.937551   54248 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 22:55:54.937620   54248 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 22:55:54.937744   54248 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 22:55:54.937846   54248 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 22:55:54.937894   54248 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 22:55:54.937990   54248 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 22:55:54.939409   54248 out.go:204]   - Booting up control plane ...
	I0717 22:55:54.939534   54248 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 22:55:54.939640   54248 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 22:55:54.939733   54248 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 22:55:54.939867   54248 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 22:55:54.940059   54248 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 22:55:54.940157   54248 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504894 seconds
	I0717 22:55:54.940283   54248 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 22:55:54.940445   54248 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 22:55:54.940525   54248 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 22:55:54.940756   54248 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-571296 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 22:55:54.940829   54248 kubeadm.go:322] [bootstrap-token] Using token: zn3d72.w9x4plx1baw35867
	I0717 22:55:54.942338   54248 out.go:204]   - Configuring RBAC rules ...
	I0717 22:55:54.942484   54248 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 22:55:54.942583   54248 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 22:55:54.942759   54248 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 22:55:54.942920   54248 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 22:55:54.943088   54248 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 22:55:54.943207   54248 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 22:55:54.943365   54248 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 22:55:54.943433   54248 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 22:55:54.943527   54248 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 22:55:54.943541   54248 kubeadm.go:322] 
	I0717 22:55:54.943646   54248 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 22:55:54.943673   54248 kubeadm.go:322] 
	I0717 22:55:54.943765   54248 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 22:55:54.943774   54248 kubeadm.go:322] 
	I0717 22:55:54.943814   54248 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 22:55:54.943906   54248 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 22:55:54.943997   54248 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 22:55:54.944009   54248 kubeadm.go:322] 
	I0717 22:55:54.944107   54248 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0717 22:55:54.944121   54248 kubeadm.go:322] 
	I0717 22:55:54.944194   54248 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 22:55:54.944204   54248 kubeadm.go:322] 
	I0717 22:55:54.944277   54248 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 22:55:54.944390   54248 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 22:55:54.944472   54248 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 22:55:54.944479   54248 kubeadm.go:322] 
	I0717 22:55:54.944574   54248 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 22:55:54.944667   54248 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 22:55:54.944677   54248 kubeadm.go:322] 
	I0717 22:55:54.944778   54248 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token zn3d72.w9x4plx1baw35867 \
	I0717 22:55:54.944924   54248 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb \
	I0717 22:55:54.944959   54248 kubeadm.go:322] 	--control-plane 
	I0717 22:55:54.944965   54248 kubeadm.go:322] 
	I0717 22:55:54.945096   54248 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 22:55:54.945110   54248 kubeadm.go:322] 
	I0717 22:55:54.945206   54248 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token zn3d72.w9x4plx1baw35867 \
	I0717 22:55:54.945367   54248 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb 
	I0717 22:55:54.945384   54248 cni.go:84] Creating CNI manager for ""
	I0717 22:55:54.945396   54248 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:55:54.947694   54248 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 22:55:54.092792   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:54.226690   54649 pod_ready.go:81] duration metric: took 4m0.00109908s waiting for pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace to be "Ready" ...
	E0717 22:55:54.226723   54649 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 22:55:54.226748   54649 pod_ready.go:38] duration metric: took 4m11.075490865s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:55:54.226791   54649 kubeadm.go:640] restartCluster took 4m33.196357187s
	W0717 22:55:54.226860   54649 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0717 22:55:54.226891   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 22:55:54.639076   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:55:54.659284   54573 api_server.go:72] duration metric: took 4m16.00305446s to wait for apiserver process to appear ...
	I0717 22:55:54.659324   54573 api_server.go:88] waiting for apiserver healthz status ...
	I0717 22:55:54.659366   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 22:55:54.659437   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 22:55:54.698007   54573 cri.go:89] found id: "c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f"
	I0717 22:55:54.698036   54573 cri.go:89] found id: ""
	I0717 22:55:54.698045   54573 logs.go:284] 1 containers: [c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f]
	I0717 22:55:54.698104   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:54.704502   54573 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 22:55:54.704584   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 22:55:54.738722   54573 cri.go:89] found id: "98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea"
	I0717 22:55:54.738752   54573 cri.go:89] found id: ""
	I0717 22:55:54.738761   54573 logs.go:284] 1 containers: [98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea]
	I0717 22:55:54.738816   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:54.743815   54573 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 22:55:54.743888   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 22:55:54.789962   54573 cri.go:89] found id: "acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266"
	I0717 22:55:54.789992   54573 cri.go:89] found id: ""
	I0717 22:55:54.790003   54573 logs.go:284] 1 containers: [acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266]
	I0717 22:55:54.790061   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:54.796502   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 22:55:54.796577   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 22:55:54.840319   54573 cri.go:89] found id: "692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629"
	I0717 22:55:54.840349   54573 cri.go:89] found id: ""
	I0717 22:55:54.840358   54573 logs.go:284] 1 containers: [692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629]
	I0717 22:55:54.840418   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:54.847001   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 22:55:54.847074   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 22:55:54.900545   54573 cri.go:89] found id: "9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567"
	I0717 22:55:54.900571   54573 cri.go:89] found id: ""
	I0717 22:55:54.900578   54573 logs.go:284] 1 containers: [9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567]
	I0717 22:55:54.900639   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:54.905595   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 22:55:54.905703   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 22:55:54.940386   54573 cri.go:89] found id: "f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c"
	I0717 22:55:54.940405   54573 cri.go:89] found id: ""
	I0717 22:55:54.940414   54573 logs.go:284] 1 containers: [f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c]
	I0717 22:55:54.940471   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:54.947365   54573 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 22:55:54.947444   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 22:55:54.993902   54573 cri.go:89] found id: ""
	I0717 22:55:54.993930   54573 logs.go:284] 0 containers: []
	W0717 22:55:54.993942   54573 logs.go:286] No container was found matching "kindnet"
	I0717 22:55:54.993950   54573 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 22:55:54.994019   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 22:55:55.040159   54573 cri.go:89] found id: "a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b"
	I0717 22:55:55.040184   54573 cri.go:89] found id: "4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6"
	I0717 22:55:55.040190   54573 cri.go:89] found id: ""
	I0717 22:55:55.040198   54573 logs.go:284] 2 containers: [a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b 4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6]
	I0717 22:55:55.040265   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:55.045151   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:55.050805   54573 logs.go:123] Gathering logs for kubelet ...
	I0717 22:55:55.050831   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 22:55:55.123810   54573 logs.go:123] Gathering logs for describe nodes ...
	I0717 22:55:55.123845   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 22:55:55.306589   54573 logs.go:123] Gathering logs for kube-scheduler [692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629] ...
	I0717 22:55:55.306623   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629"
	I0717 22:55:55.351035   54573 logs.go:123] Gathering logs for kube-controller-manager [f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c] ...
	I0717 22:55:55.351083   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c"
	I0717 22:55:55.416647   54573 logs.go:123] Gathering logs for storage-provisioner [a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b] ...
	I0717 22:55:55.416705   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b"
	I0717 22:55:55.460413   54573 logs.go:123] Gathering logs for CRI-O ...
	I0717 22:55:55.460452   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 22:55:56.034198   54573 logs.go:123] Gathering logs for container status ...
	I0717 22:55:56.034238   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 22:55:56.073509   54573 logs.go:123] Gathering logs for dmesg ...
	I0717 22:55:56.073552   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 22:55:56.086385   54573 logs.go:123] Gathering logs for kube-apiserver [c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f] ...
	I0717 22:55:56.086413   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f"
	I0717 22:55:56.132057   54573 logs.go:123] Gathering logs for etcd [98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea] ...
	I0717 22:55:56.132087   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea"
	I0717 22:55:56.176634   54573 logs.go:123] Gathering logs for coredns [acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266] ...
	I0717 22:55:56.176663   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266"
	I0717 22:55:56.213415   54573 logs.go:123] Gathering logs for kube-proxy [9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567] ...
	I0717 22:55:56.213451   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567"
	I0717 22:55:56.248868   54573 logs.go:123] Gathering logs for storage-provisioner [4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6] ...
	I0717 22:55:56.248912   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6"
	I0717 22:55:53.969902   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:56.470299   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:54.949399   54248 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 22:55:54.984090   54248 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 22:55:55.014819   54248 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 22:55:55.014950   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:55.015014   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.31.0 minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8 minikube.k8s.io/name=embed-certs-571296 minikube.k8s.io/updated_at=2023_07_17T22_55_55_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:55.558851   54248 ops.go:34] apiserver oom_adj: -16
	I0717 22:55:55.558970   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:56.177713   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:56.677742   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:57.177957   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:57.677787   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:58.793638   54573 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0717 22:55:58.806705   54573 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I0717 22:55:58.808953   54573 api_server.go:141] control plane version: v1.27.3
	I0717 22:55:58.808972   54573 api_server.go:131] duration metric: took 4.149642061s to wait for apiserver health ...
	I0717 22:55:58.808979   54573 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 22:55:58.808999   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 22:55:58.809042   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 22:55:58.840945   54573 cri.go:89] found id: "c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f"
	I0717 22:55:58.840965   54573 cri.go:89] found id: ""
	I0717 22:55:58.840972   54573 logs.go:284] 1 containers: [c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f]
	I0717 22:55:58.841028   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:58.845463   54573 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 22:55:58.845557   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 22:55:58.877104   54573 cri.go:89] found id: "98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea"
	I0717 22:55:58.877134   54573 cri.go:89] found id: ""
	I0717 22:55:58.877143   54573 logs.go:284] 1 containers: [98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea]
	I0717 22:55:58.877199   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:58.881988   54573 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 22:55:58.882060   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 22:55:58.920491   54573 cri.go:89] found id: "acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266"
	I0717 22:55:58.920520   54573 cri.go:89] found id: ""
	I0717 22:55:58.920530   54573 logs.go:284] 1 containers: [acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266]
	I0717 22:55:58.920588   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:58.925170   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 22:55:58.925239   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 22:55:58.970908   54573 cri.go:89] found id: "692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629"
	I0717 22:55:58.970928   54573 cri.go:89] found id: ""
	I0717 22:55:58.970937   54573 logs.go:284] 1 containers: [692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629]
	I0717 22:55:58.970988   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:58.976950   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 22:55:58.977005   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 22:55:59.007418   54573 cri.go:89] found id: "9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567"
	I0717 22:55:59.007438   54573 cri.go:89] found id: ""
	I0717 22:55:59.007445   54573 logs.go:284] 1 containers: [9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567]
	I0717 22:55:59.007550   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:59.012222   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 22:55:59.012279   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 22:55:59.048939   54573 cri.go:89] found id: "f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c"
	I0717 22:55:59.048960   54573 cri.go:89] found id: ""
	I0717 22:55:59.048968   54573 logs.go:284] 1 containers: [f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c]
	I0717 22:55:59.049023   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:59.053335   54573 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 22:55:59.053400   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 22:55:59.084168   54573 cri.go:89] found id: ""
	I0717 22:55:59.084198   54573 logs.go:284] 0 containers: []
	W0717 22:55:59.084208   54573 logs.go:286] No container was found matching "kindnet"
	I0717 22:55:59.084221   54573 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 22:55:59.084270   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 22:55:59.117213   54573 cri.go:89] found id: "a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b"
	I0717 22:55:59.117237   54573 cri.go:89] found id: "4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6"
	I0717 22:55:59.117244   54573 cri.go:89] found id: ""
	I0717 22:55:59.117252   54573 logs.go:284] 2 containers: [a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b 4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6]
	I0717 22:55:59.117311   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:59.122816   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:59.127074   54573 logs.go:123] Gathering logs for dmesg ...
	I0717 22:55:59.127095   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 22:55:59.142525   54573 logs.go:123] Gathering logs for kube-apiserver [c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f] ...
	I0717 22:55:59.142557   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f"
	I0717 22:55:59.190652   54573 logs.go:123] Gathering logs for kube-scheduler [692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629] ...
	I0717 22:55:59.190690   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629"
	I0717 22:55:59.231512   54573 logs.go:123] Gathering logs for kube-controller-manager [f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c] ...
	I0717 22:55:59.231547   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c"
	I0717 22:55:59.280732   54573 logs.go:123] Gathering logs for storage-provisioner [4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6] ...
	I0717 22:55:59.280767   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6"
	I0717 22:55:59.318213   54573 logs.go:123] Gathering logs for CRI-O ...
	I0717 22:55:59.318237   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 22:55:59.872973   54573 logs.go:123] Gathering logs for container status ...
	I0717 22:55:59.873017   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 22:55:59.911891   54573 logs.go:123] Gathering logs for kubelet ...
	I0717 22:55:59.911918   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 22:55:59.976450   54573 logs.go:123] Gathering logs for describe nodes ...
	I0717 22:55:59.976483   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 22:56:00.099556   54573 logs.go:123] Gathering logs for etcd [98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea] ...
	I0717 22:56:00.099592   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea"
	I0717 22:56:00.145447   54573 logs.go:123] Gathering logs for coredns [acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266] ...
	I0717 22:56:00.145479   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266"
	I0717 22:56:00.181246   54573 logs.go:123] Gathering logs for kube-proxy [9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567] ...
	I0717 22:56:00.181277   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567"
	I0717 22:56:00.221127   54573 logs.go:123] Gathering logs for storage-provisioner [a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b] ...
	I0717 22:56:00.221150   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b"
	I0717 22:56:02.761729   54573 system_pods.go:59] 8 kube-system pods found
	I0717 22:56:02.761758   54573 system_pods.go:61] "coredns-5d78c9869d-2mpst" [7516b57f-a4cb-4e2f-995e-8e063bed22ae] Running
	I0717 22:56:02.761765   54573 system_pods.go:61] "etcd-no-preload-935524" [b663c4f9-d98e-457d-b511-435bea5e9525] Running
	I0717 22:56:02.761772   54573 system_pods.go:61] "kube-apiserver-no-preload-935524" [fb6f55fc-8705-46aa-a23c-e7870e52e542] Running
	I0717 22:56:02.761778   54573 system_pods.go:61] "kube-controller-manager-no-preload-935524" [37d43d22-e857-4a9b-b0cb-a9fc39931baa] Running
	I0717 22:56:02.761783   54573 system_pods.go:61] "kube-proxy-qhp66" [8bc95955-b7ba-41e3-ac67-604a9695f784] Running
	I0717 22:56:02.761790   54573 system_pods.go:61] "kube-scheduler-no-preload-935524" [86fef4e0-b156-421a-8d53-0d34cae2cdb3] Running
	I0717 22:56:02.761800   54573 system_pods.go:61] "metrics-server-74d5c6b9c-tlbpl" [7c478efe-4435-45dd-a688-745872fc2918] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:56:02.761809   54573 system_pods.go:61] "storage-provisioner" [85812d54-7a57-430b-991e-e301f123a86a] Running
	I0717 22:56:02.761823   54573 system_pods.go:74] duration metric: took 3.952838173s to wait for pod list to return data ...
	I0717 22:56:02.761837   54573 default_sa.go:34] waiting for default service account to be created ...
	I0717 22:56:02.764526   54573 default_sa.go:45] found service account: "default"
	I0717 22:56:02.764547   54573 default_sa.go:55] duration metric: took 2.700233ms for default service account to be created ...
	I0717 22:56:02.764556   54573 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 22:56:02.770288   54573 system_pods.go:86] 8 kube-system pods found
	I0717 22:56:02.770312   54573 system_pods.go:89] "coredns-5d78c9869d-2mpst" [7516b57f-a4cb-4e2f-995e-8e063bed22ae] Running
	I0717 22:56:02.770318   54573 system_pods.go:89] "etcd-no-preload-935524" [b663c4f9-d98e-457d-b511-435bea5e9525] Running
	I0717 22:56:02.770323   54573 system_pods.go:89] "kube-apiserver-no-preload-935524" [fb6f55fc-8705-46aa-a23c-e7870e52e542] Running
	I0717 22:56:02.770327   54573 system_pods.go:89] "kube-controller-manager-no-preload-935524" [37d43d22-e857-4a9b-b0cb-a9fc39931baa] Running
	I0717 22:56:02.770330   54573 system_pods.go:89] "kube-proxy-qhp66" [8bc95955-b7ba-41e3-ac67-604a9695f784] Running
	I0717 22:56:02.770334   54573 system_pods.go:89] "kube-scheduler-no-preload-935524" [86fef4e0-b156-421a-8d53-0d34cae2cdb3] Running
	I0717 22:56:02.770340   54573 system_pods.go:89] "metrics-server-74d5c6b9c-tlbpl" [7c478efe-4435-45dd-a688-745872fc2918] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:56:02.770346   54573 system_pods.go:89] "storage-provisioner" [85812d54-7a57-430b-991e-e301f123a86a] Running
	I0717 22:56:02.770354   54573 system_pods.go:126] duration metric: took 5.793179ms to wait for k8s-apps to be running ...
	I0717 22:56:02.770362   54573 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 22:56:02.770410   54573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:56:02.786132   54573 system_svc.go:56] duration metric: took 15.760975ms WaitForService to wait for kubelet.
	I0717 22:56:02.786161   54573 kubeadm.go:581] duration metric: took 4m24.129949995s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 22:56:02.786182   54573 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:56:02.789957   54573 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:56:02.789978   54573 node_conditions.go:123] node cpu capacity is 2
	I0717 22:56:02.789988   54573 node_conditions.go:105] duration metric: took 3.802348ms to run NodePressure ...
	I0717 22:56:02.789999   54573 start.go:228] waiting for startup goroutines ...
	I0717 22:56:02.790008   54573 start.go:233] waiting for cluster config update ...
	I0717 22:56:02.790021   54573 start.go:242] writing updated cluster config ...
	I0717 22:56:02.790308   54573 ssh_runner.go:195] Run: rm -f paused
	I0717 22:56:02.840154   54573 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 22:56:02.843243   54573 out.go:177] * Done! kubectl is now configured to use "no-preload-935524" cluster and "default" namespace by default
	I0717 22:55:58.471229   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:00.969263   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:58.177892   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:58.677211   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:59.177916   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:59.678088   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:00.177933   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:00.678096   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:01.177184   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:01.677152   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:02.177561   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:02.677947   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:02.970089   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:05.470783   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:03.177870   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:03.677715   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:04.177238   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:04.677261   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:05.177220   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:05.678164   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:06.177948   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:06.677392   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:07.177167   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:07.678131   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:07.945881   54248 kubeadm.go:1081] duration metric: took 12.930982407s to wait for elevateKubeSystemPrivileges.
	I0717 22:56:07.945928   54248 kubeadm.go:406] StartCluster complete in 5m28.89261834s
	I0717 22:56:07.945958   54248 settings.go:142] acquiring lock: {Name:mk9be0d05c943f5010cb1ca9690e7c6a87f18950 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:56:07.946058   54248 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:56:07.948004   54248 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/kubeconfig: {Name:mk1295c692902c2f497638c9fc1fd126fdb1a2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:56:07.948298   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 22:56:07.948538   54248 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 22:56:07.948628   54248 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-571296"
	I0717 22:56:07.948639   54248 config.go:182] Loaded profile config "embed-certs-571296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:56:07.948657   54248 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-571296"
	W0717 22:56:07.948669   54248 addons.go:240] addon storage-provisioner should already be in state true
	I0717 22:56:07.948687   54248 addons.go:69] Setting default-storageclass=true in profile "embed-certs-571296"
	I0717 22:56:07.948708   54248 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-571296"
	I0717 22:56:07.948713   54248 host.go:66] Checking if "embed-certs-571296" exists ...
	I0717 22:56:07.949078   54248 addons.go:69] Setting metrics-server=true in profile "embed-certs-571296"
	I0717 22:56:07.949100   54248 addons.go:231] Setting addon metrics-server=true in "embed-certs-571296"
	I0717 22:56:07.949101   54248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	W0717 22:56:07.949107   54248 addons.go:240] addon metrics-server should already be in state true
	I0717 22:56:07.949126   54248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:07.949148   54248 host.go:66] Checking if "embed-certs-571296" exists ...
	I0717 22:56:07.949361   54248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:07.949390   54248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:07.949481   54248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:07.949508   54248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:07.967136   54248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35403
	I0717 22:56:07.967705   54248 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:07.967874   54248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43925
	I0717 22:56:07.968286   54248 main.go:141] libmachine: Using API Version  1
	I0717 22:56:07.968317   54248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:07.968395   54248 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:07.968741   54248 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:07.969000   54248 main.go:141] libmachine: Using API Version  1
	I0717 22:56:07.969019   54248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:07.969056   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetState
	I0717 22:56:07.969416   54248 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:07.969964   54248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:07.969993   54248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:07.970220   54248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37531
	I0717 22:56:07.970682   54248 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:07.971172   54248 main.go:141] libmachine: Using API Version  1
	I0717 22:56:07.971194   54248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:07.971603   54248 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:07.972617   54248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:07.972655   54248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:07.988352   54248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38131
	I0717 22:56:07.988872   54248 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:07.989481   54248 main.go:141] libmachine: Using API Version  1
	I0717 22:56:07.989507   54248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:07.989913   54248 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:07.990198   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetState
	I0717 22:56:07.992174   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:56:07.992359   54248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34283
	I0717 22:56:07.993818   54248 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 22:56:07.995350   54248 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 22:56:07.995373   54248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 22:56:07.995393   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:56:07.992931   54248 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:07.995909   54248 main.go:141] libmachine: Using API Version  1
	I0717 22:56:07.995933   54248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:07.996276   54248 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:07.996424   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetState
	I0717 22:56:07.998630   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:56:08.000660   54248 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:56:07.999385   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:56:07.999983   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:56:08.002498   54248 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:56:08.002510   54248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 22:56:08.002529   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:56:08.002556   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:56:08.002587   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:56:08.002626   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:56:08.002714   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:56:08.002874   54248 sshutil.go:53] new ssh client: &{IP:192.168.61.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa Username:docker}
	I0717 22:56:08.003290   54248 addons.go:231] Setting addon default-storageclass=true in "embed-certs-571296"
	W0717 22:56:08.003311   54248 addons.go:240] addon default-storageclass should already be in state true
	I0717 22:56:08.003340   54248 host.go:66] Checking if "embed-certs-571296" exists ...
	I0717 22:56:08.003736   54248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:08.003763   54248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:08.005771   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:56:08.006163   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:56:08.006194   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:56:08.006393   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:56:08.006560   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:56:08.006744   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:56:08.006890   54248 sshutil.go:53] new ssh client: &{IP:192.168.61.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa Username:docker}
	I0717 22:56:08.025042   54248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42975
	I0717 22:56:08.025743   54248 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:08.026232   54248 main.go:141] libmachine: Using API Version  1
	I0717 22:56:08.026252   54248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:08.026732   54248 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:08.027295   54248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:08.027340   54248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:08.044326   54248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40863
	I0717 22:56:08.044743   54248 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:08.045285   54248 main.go:141] libmachine: Using API Version  1
	I0717 22:56:08.045309   54248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:08.045686   54248 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:08.045900   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetState
	I0717 22:56:08.047695   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:56:08.047962   54248 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 22:56:08.047980   54248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 22:56:08.048000   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:56:08.050685   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:56:08.051084   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:56:08.051115   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:56:08.051376   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:56:08.051561   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:56:08.051762   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:56:08.051880   54248 sshutil.go:53] new ssh client: &{IP:192.168.61.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa Username:docker}
	I0717 22:56:08.221022   54248 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 22:56:08.221057   54248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 22:56:08.262777   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 22:56:08.286077   54248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:56:08.301703   54248 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 22:56:08.301728   54248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 22:56:08.314524   54248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 22:56:08.370967   54248 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 22:56:08.370989   54248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 22:56:08.585011   54248 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-571296" context rescaled to 1 replicas
	I0717 22:56:08.585061   54248 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.179 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 22:56:08.587143   54248 out.go:177] * Verifying Kubernetes components...
	I0717 22:56:08.588842   54248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:56:08.666555   54248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 22:56:10.506154   54248 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.243338067s)
	I0717 22:56:10.506244   54248 start.go:901] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
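	(The sed pipeline completed above splices a hosts stanza into the CoreDNS Corefile so that host.minikube.internal resolves to the host-side gateway address. Reconstructed from the sed expression itself, not copied from the resulting ConfigMap, the injected fragment is:

	        hosts {
	           192.168.61.1 host.minikube.internal
	           fallthrough
	        }

	followed by a "log" line inserted ahead of the "errors" plugin.)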
	I0717 22:56:11.016648   54248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.730514867s)
	I0717 22:56:11.016699   54248 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.427824424s)
	I0717 22:56:11.016659   54248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.702100754s)
	I0717 22:56:11.016728   54248 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:11.016733   54248 node_ready.go:35] waiting up to 6m0s for node "embed-certs-571296" to be "Ready" ...
	I0717 22:56:11.016742   54248 main.go:141] libmachine: (embed-certs-571296) Calling .Close
	I0717 22:56:11.016707   54248 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:11.016862   54248 main.go:141] libmachine: (embed-certs-571296) Calling .Close
	I0717 22:56:11.017139   54248 main.go:141] libmachine: (embed-certs-571296) DBG | Closing plugin on server side
	I0717 22:56:11.017150   54248 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:11.017165   54248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:11.017168   54248 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:11.017175   54248 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:11.017177   54248 main.go:141] libmachine: (embed-certs-571296) DBG | Closing plugin on server side
	I0717 22:56:11.017183   54248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:11.017186   54248 main.go:141] libmachine: (embed-certs-571296) Calling .Close
	I0717 22:56:11.017196   54248 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:11.017242   54248 main.go:141] libmachine: (embed-certs-571296) Calling .Close
	I0717 22:56:11.017409   54248 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:11.017425   54248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:11.017443   54248 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:11.017452   54248 main.go:141] libmachine: (embed-certs-571296) Calling .Close
	I0717 22:56:11.017571   54248 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:11.017600   54248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:11.018689   54248 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:11.018706   54248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:11.018703   54248 main.go:141] libmachine: (embed-certs-571296) DBG | Closing plugin on server side
	I0717 22:56:11.043490   54248 node_ready.go:49] node "embed-certs-571296" has status "Ready":"True"
	I0717 22:56:11.043511   54248 node_ready.go:38] duration metric: took 26.766819ms waiting for node "embed-certs-571296" to be "Ready" ...
	I0717 22:56:11.043518   54248 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:56:11.057095   54248 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-6ljtn" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:11.116641   54248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.450034996s)
	I0717 22:56:11.116706   54248 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:11.116724   54248 main.go:141] libmachine: (embed-certs-571296) Calling .Close
	I0717 22:56:11.117015   54248 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:11.117034   54248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:11.117046   54248 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:11.117058   54248 main.go:141] libmachine: (embed-certs-571296) Calling .Close
	I0717 22:56:11.117341   54248 main.go:141] libmachine: (embed-certs-571296) DBG | Closing plugin on server side
	I0717 22:56:11.117389   54248 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:11.117408   54248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:11.117427   54248 addons.go:467] Verifying addon metrics-server=true in "embed-certs-571296"
	I0717 22:56:11.119741   54248 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 22:56:07.979850   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:10.471118   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:12.472257   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:11.122047   54248 addons.go:502] enable addons completed in 3.173503334s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 22:56:12.605075   54248 pod_ready.go:92] pod "coredns-5d78c9869d-6ljtn" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:12.605111   54248 pod_ready.go:81] duration metric: took 1.547984916s waiting for pod "coredns-5d78c9869d-6ljtn" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:12.605126   54248 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-tq27r" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:12.619682   54248 pod_ready.go:92] pod "coredns-5d78c9869d-tq27r" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:12.619710   54248 pod_ready.go:81] duration metric: took 14.576786ms waiting for pod "coredns-5d78c9869d-tq27r" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:12.619722   54248 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:12.628850   54248 pod_ready.go:92] pod "etcd-embed-certs-571296" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:12.628878   54248 pod_ready.go:81] duration metric: took 9.147093ms waiting for pod "etcd-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:12.628889   54248 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:12.641360   54248 pod_ready.go:92] pod "kube-apiserver-embed-certs-571296" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:12.641381   54248 pod_ready.go:81] duration metric: took 12.485183ms waiting for pod "kube-apiserver-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:12.641391   54248 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:12.656634   54248 pod_ready.go:92] pod "kube-controller-manager-embed-certs-571296" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:12.656663   54248 pod_ready.go:81] duration metric: took 15.264878ms waiting for pod "kube-controller-manager-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:12.656677   54248 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xjpds" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:14.480168   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:16.969340   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:13.530098   54248 pod_ready.go:92] pod "kube-proxy-xjpds" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:13.530129   54248 pod_ready.go:81] duration metric: took 873.444575ms waiting for pod "kube-proxy-xjpds" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:13.530144   54248 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:13.821592   54248 pod_ready.go:92] pod "kube-scheduler-embed-certs-571296" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:13.821615   54248 pod_ready.go:81] duration metric: took 291.46393ms waiting for pod "kube-scheduler-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:13.821625   54248 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:16.228210   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:19.470498   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:21.969531   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:18.228289   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:20.228420   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:22.228472   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:26.250616   54649 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.023698231s)
	I0717 22:56:26.250690   54649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:56:26.264095   54649 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 22:56:26.274295   54649 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 22:56:26.284265   54649 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:56:26.284332   54649 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 22:56:26.341601   54649 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0717 22:56:26.341719   54649 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 22:56:26.507992   54649 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 22:56:26.508194   54649 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 22:56:26.508344   54649 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 22:56:26.684682   54649 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 22:56:26.686603   54649 out.go:204]   - Generating certificates and keys ...
	I0717 22:56:26.686753   54649 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 22:56:26.686833   54649 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 22:56:26.686963   54649 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 22:56:26.687386   54649 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0717 22:56:26.687802   54649 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 22:56:26.688484   54649 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0717 22:56:26.689007   54649 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0717 22:56:26.689618   54649 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0717 22:56:26.690234   54649 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 22:56:26.690845   54649 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 22:56:26.691391   54649 kubeadm.go:322] [certs] Using the existing "sa" key
	I0717 22:56:26.691484   54649 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 22:56:26.793074   54649 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 22:56:26.956354   54649 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 22:56:27.033560   54649 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 22:56:27.222598   54649 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 22:56:27.242695   54649 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 22:56:27.243923   54649 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 22:56:27.244009   54649 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 22:56:27.382359   54649 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 22:56:27.385299   54649 out.go:204]   - Booting up control plane ...
	I0717 22:56:27.385459   54649 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 22:56:27.385595   54649 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 22:56:27.385699   54649 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 22:56:27.386230   54649 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 22:56:27.388402   54649 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 22:56:24.469634   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:26.470480   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:24.231654   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:26.728390   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:28.471360   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:30.493443   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:28.728821   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:30.729474   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:32.731419   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:35.894189   54649 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.505577 seconds
	I0717 22:56:35.894298   54649 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 22:56:35.922569   54649 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 22:56:36.459377   54649 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 22:56:36.459628   54649 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-504828 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 22:56:36.981248   54649 kubeadm.go:322] [bootstrap-token] Using token: aq0fl5.e7xnmbjqmeipfdlw
	I0717 22:56:36.983221   54649 out.go:204]   - Configuring RBAC rules ...
	I0717 22:56:36.983401   54649 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 22:56:37.001576   54649 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 22:56:37.012679   54649 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 22:56:37.018002   54649 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 22:56:37.025356   54649 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 22:56:37.030822   54649 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 22:56:37.049741   54649 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 22:56:37.309822   54649 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 22:56:37.414906   54649 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 22:56:37.414947   54649 kubeadm.go:322] 
	I0717 22:56:37.415023   54649 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 22:56:37.415035   54649 kubeadm.go:322] 
	I0717 22:56:37.415135   54649 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 22:56:37.415145   54649 kubeadm.go:322] 
	I0717 22:56:37.415190   54649 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 22:56:37.415290   54649 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 22:56:37.415373   54649 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 22:56:37.415383   54649 kubeadm.go:322] 
	I0717 22:56:37.415495   54649 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0717 22:56:37.415529   54649 kubeadm.go:322] 
	I0717 22:56:37.415593   54649 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 22:56:37.415602   54649 kubeadm.go:322] 
	I0717 22:56:37.415677   54649 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 22:56:37.415755   54649 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 22:56:37.415892   54649 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 22:56:37.415904   54649 kubeadm.go:322] 
	I0717 22:56:37.416034   54649 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 22:56:37.416151   54649 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 22:56:37.416172   54649 kubeadm.go:322] 
	I0717 22:56:37.416306   54649 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token aq0fl5.e7xnmbjqmeipfdlw \
	I0717 22:56:37.416451   54649 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb \
	I0717 22:56:37.416478   54649 kubeadm.go:322] 	--control-plane 
	I0717 22:56:37.416487   54649 kubeadm.go:322] 
	I0717 22:56:37.416596   54649 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 22:56:37.416606   54649 kubeadm.go:322] 
	I0717 22:56:37.416708   54649 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token aq0fl5.e7xnmbjqmeipfdlw \
	I0717 22:56:37.416850   54649 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb 
	I0717 22:56:37.417385   54649 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 22:56:37.417413   54649 cni.go:84] Creating CNI manager for ""
	I0717 22:56:37.417426   54649 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:56:37.419367   54649 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 22:56:37.421047   54649 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 22:56:37.456430   54649 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
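	(The 457-byte file copied above is the bridge CNI config that CRI-O loads from /etc/cni/net.d. Its exact contents are not echoed in the log; a conflist of the usual bridge-plus-portmap shape looks roughly like the sketch below, with every field value illustrative only:

	        {
	          "cniVersion": "0.3.1",
	          "name": "k8s",
	          "plugins": [
	            {
	              "type": "bridge",
	              "bridge": "bridge",
	              "isDefaultGateway": true,
	              "ipMasq": true,
	              "hairpinMode": true,
	              "ipam": { "type": "host-local" }
	            },
	            { "type": "portmap", "capabilities": { "portMappings": true } }
	          ]
	        }
	)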
	I0717 22:56:37.520764   54649 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 22:56:37.520861   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:37.520877   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.31.0 minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8 minikube.k8s.io/name=default-k8s-diff-port-504828 minikube.k8s.io/updated_at=2023_07_17T22_56_37_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:32.970043   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:35.469085   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:35.257714   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:37.730437   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:37.914888   54649 ops.go:34] apiserver oom_adj: -16
	I0717 22:56:37.914920   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:38.508471   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:39.008147   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:39.508371   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:40.008059   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:40.508319   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:41.008945   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:41.507958   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:42.008509   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:42.508920   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:37.969711   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:39.970230   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:42.468790   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:40.227771   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:42.228268   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:43.008542   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:43.508809   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:44.008922   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:44.508771   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:45.008681   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:45.507925   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:46.008078   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:46.508950   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:47.008902   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:47.508705   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:44.470199   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:46.969467   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:44.728843   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:46.729321   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:48.008736   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:48.508008   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:49.008524   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:49.508783   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:50.008620   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:50.508131   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:50.675484   54649 kubeadm.go:1081] duration metric: took 13.154682677s to wait for elevateKubeSystemPrivileges.
	I0717 22:56:50.675522   54649 kubeadm.go:406] StartCluster complete in 5m29.688096626s
	I0717 22:56:50.675542   54649 settings.go:142] acquiring lock: {Name:mk9be0d05c943f5010cb1ca9690e7c6a87f18950 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:56:50.675625   54649 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:56:50.678070   54649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/kubeconfig: {Name:mk1295c692902c2f497638c9fc1fd126fdb1a2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:56:50.678358   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 22:56:50.678397   54649 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 22:56:50.678485   54649 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-504828"
	I0717 22:56:50.678504   54649 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-504828"
	I0717 22:56:50.678504   54649 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-504828"
	W0717 22:56:50.678515   54649 addons.go:240] addon storage-provisioner should already be in state true
	I0717 22:56:50.678526   54649 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-504828"
	I0717 22:56:50.678537   54649 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-504828"
	I0717 22:56:50.678557   54649 host.go:66] Checking if "default-k8s-diff-port-504828" exists ...
	I0717 22:56:50.678561   54649 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-504828"
	W0717 22:56:50.678571   54649 addons.go:240] addon metrics-server should already be in state true
	I0717 22:56:50.678630   54649 host.go:66] Checking if "default-k8s-diff-port-504828" exists ...
	I0717 22:56:50.678570   54649 config.go:182] Loaded profile config "default-k8s-diff-port-504828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:56:50.678961   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:50.678995   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:50.679011   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:50.679039   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:50.678962   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:50.679094   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:50.696229   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43571
	I0717 22:56:50.696669   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:50.697375   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:56:50.697414   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:50.697831   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:50.698436   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:50.698474   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:50.698998   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46873
	I0717 22:56:50.699168   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41135
	I0717 22:56:50.699382   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:50.699530   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:50.699812   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:56:50.699824   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:50.700021   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:56:50.700044   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:50.700219   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:50.700385   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:50.700570   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetState
	I0717 22:56:50.700748   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:50.700785   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:50.715085   54649 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-504828"
	W0717 22:56:50.715119   54649 addons.go:240] addon default-storageclass should already be in state true
	I0717 22:56:50.715149   54649 host.go:66] Checking if "default-k8s-diff-port-504828" exists ...
	I0717 22:56:50.715547   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:50.715580   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:50.715831   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41743
	I0717 22:56:50.716347   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:50.716905   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:56:50.716921   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:50.717285   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:50.717334   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41035
	I0717 22:56:50.717493   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetState
	I0717 22:56:50.717699   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:50.718238   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:56:50.718257   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:50.718580   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:50.718843   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetState
	I0717 22:56:50.719486   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:56:50.721699   54649 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 22:56:50.723464   54649 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 22:56:50.723484   54649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 22:56:50.720832   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:56:50.723509   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:56:50.725600   54649 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:56:50.728061   54649 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:56:50.726758   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:56:50.727455   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:56:50.728105   54649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 22:56:50.728133   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:56:50.728134   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:56:50.728166   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:56:50.728380   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:56:50.728785   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:56:50.728938   54649 sshutil.go:53] new ssh client: &{IP:192.168.72.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa Username:docker}
	I0717 22:56:50.731891   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:56:50.732348   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:56:50.732379   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:56:50.732589   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:56:50.732793   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:56:50.732974   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:56:50.733113   54649 sshutil.go:53] new ssh client: &{IP:192.168.72.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa Username:docker}
	I0717 22:56:50.741098   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34891
	I0717 22:56:50.741744   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:50.742386   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:56:50.742410   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:50.742968   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:50.743444   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:50.743490   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:50.759985   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38185
	I0717 22:56:50.760547   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:50.761145   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:56:50.761171   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:50.761598   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:50.761779   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetState
	I0717 22:56:50.763276   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:56:50.763545   54649 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 22:56:50.763559   54649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 22:56:50.763574   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:56:50.766525   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:56:50.766964   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:56:50.766995   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:56:50.767254   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:56:50.767444   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:56:50.767636   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:56:50.767803   54649 sshutil.go:53] new ssh client: &{IP:192.168.72.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa Username:docker}
	I0717 22:56:50.963671   54649 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 22:56:50.963698   54649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 22:56:50.982828   54649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 22:56:50.985884   54649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:56:50.989077   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 22:56:51.020140   54649 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 22:56:51.020174   54649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 22:56:51.094548   54649 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 22:56:51.094574   54649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 22:56:51.185896   54649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 22:56:51.238666   54649 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-504828" context rescaled to 1 replicas
	I0717 22:56:51.238704   54649 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.118 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 22:56:51.241792   54649 out.go:177] * Verifying Kubernetes components...
	I0717 22:56:51.243720   54649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:56:49.470925   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:51.970366   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:48.732421   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:50.742608   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:52.980991   54649 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.998121603s)
	I0717 22:56:52.981060   54649 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:52.981078   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .Close
	I0717 22:56:52.981422   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | Closing plugin on server side
	I0717 22:56:52.981424   54649 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:52.981460   54649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:52.981472   54649 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:52.981486   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .Close
	I0717 22:56:52.981815   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | Closing plugin on server side
	I0717 22:56:52.981906   54649 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:52.981923   54649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:52.981962   54649 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:52.981979   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .Close
	I0717 22:56:52.982328   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | Closing plugin on server side
	I0717 22:56:52.982335   54649 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:52.982352   54649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:53.384207   54649 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.398283926s)
	I0717 22:56:53.384259   54649 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:53.384263   54649 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.39515958s)
	I0717 22:56:53.384272   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .Close
	I0717 22:56:53.384280   54649 start.go:901] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0717 22:56:53.384588   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | Closing plugin on server side
	I0717 22:56:53.384664   54649 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:53.384680   54649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:53.384694   54649 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:53.384711   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .Close
	I0717 22:56:53.385419   54649 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:53.385438   54649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:53.385446   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | Closing plugin on server side
	I0717 22:56:53.810615   54649 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.624668019s)
	I0717 22:56:53.810613   54649 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.5668435s)
	I0717 22:56:53.810690   54649 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:53.810712   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .Close
	I0717 22:56:53.810717   54649 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-504828" to be "Ready" ...
	I0717 22:56:53.811092   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | Closing plugin on server side
	I0717 22:56:53.811172   54649 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:53.811191   54649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:53.811209   54649 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:53.811223   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .Close
	I0717 22:56:53.811501   54649 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:53.811519   54649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:53.811529   54649 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-504828"
	I0717 22:56:53.813588   54649 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0717 22:56:53.815209   54649 addons.go:502] enable addons completed in 3.136812371s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0717 22:56:53.848049   54649 node_ready.go:49] node "default-k8s-diff-port-504828" has status "Ready":"True"
	I0717 22:56:53.848070   54649 node_ready.go:38] duration metric: took 37.336626ms waiting for node "default-k8s-diff-port-504828" to be "Ready" ...
	I0717 22:56:53.848078   54649 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:56:53.869392   54649 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-rqcjj" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.922409   54649 pod_ready.go:92] pod "coredns-5d78c9869d-rqcjj" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:55.922433   54649 pod_ready.go:81] duration metric: took 2.05301467s waiting for pod "coredns-5d78c9869d-rqcjj" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.922442   54649 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-xz4mj" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.930140   54649 pod_ready.go:92] pod "coredns-5d78c9869d-xz4mj" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:55.930162   54649 pod_ready.go:81] duration metric: took 7.714745ms waiting for pod "coredns-5d78c9869d-xz4mj" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.930171   54649 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.938968   54649 pod_ready.go:92] pod "etcd-default-k8s-diff-port-504828" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:55.938994   54649 pod_ready.go:81] duration metric: took 8.813777ms waiting for pod "etcd-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.939006   54649 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.950100   54649 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-504828" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:55.950127   54649 pod_ready.go:81] duration metric: took 11.110719ms waiting for pod "kube-apiserver-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.950141   54649 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.956205   54649 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-504828" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:55.956228   54649 pod_ready.go:81] duration metric: took 6.078268ms waiting for pod "kube-controller-manager-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.956240   54649 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nmtc8" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:56.318975   54649 pod_ready.go:92] pod "kube-proxy-nmtc8" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:56.319002   54649 pod_ready.go:81] duration metric: took 362.754902ms waiting for pod "kube-proxy-nmtc8" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:56.319012   54649 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:56.725010   54649 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-504828" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:56.725042   54649 pod_ready.go:81] duration metric: took 406.022192ms waiting for pod "kube-scheduler-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:56.725059   54649 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:53.971176   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:56.468730   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:57.063020   53870 pod_ready.go:81] duration metric: took 4m0.001070587s waiting for pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace to be "Ready" ...
	E0717 22:56:57.063061   53870 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 22:56:57.063088   53870 pod_ready.go:38] duration metric: took 4m1.198793286s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:56:57.063114   53870 kubeadm.go:640] restartCluster took 5m14.33125167s
	W0717 22:56:57.063164   53870 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0717 22:56:57.063188   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 22:56:53.230170   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:55.230713   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:57.729746   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:59.128445   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:01.628013   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:59.730555   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:02.228533   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:03.628469   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:06.127096   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:04.228878   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:06.229004   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:08.128257   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:10.128530   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:12.128706   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:10.086799   53870 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.023585108s)
	I0717 22:57:10.086877   53870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:57:10.102476   53870 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 22:57:10.112904   53870 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 22:57:10.123424   53870 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:57:10.123471   53870 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0717 22:57:10.352747   53870 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 22:57:08.232655   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:10.730595   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:14.129308   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:16.627288   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:13.230023   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:15.730720   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:18.628332   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:20.629305   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:18.227910   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:20.228411   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:22.230069   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:23.708206   53870 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0717 22:57:23.708283   53870 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 22:57:23.708382   53870 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 22:57:23.708529   53870 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 22:57:23.708651   53870 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 22:57:23.708789   53870 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 22:57:23.708916   53870 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 22:57:23.708988   53870 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0717 22:57:23.709078   53870 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 22:57:23.710652   53870 out.go:204]   - Generating certificates and keys ...
	I0717 22:57:23.710759   53870 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 22:57:23.710840   53870 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 22:57:23.710959   53870 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 22:57:23.711058   53870 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0717 22:57:23.711156   53870 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 22:57:23.711234   53870 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0717 22:57:23.711314   53870 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0717 22:57:23.711415   53870 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0717 22:57:23.711522   53870 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 22:57:23.711635   53870 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 22:57:23.711697   53870 kubeadm.go:322] [certs] Using the existing "sa" key
	I0717 22:57:23.711776   53870 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 22:57:23.711831   53870 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 22:57:23.711892   53870 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 22:57:23.711978   53870 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 22:57:23.712048   53870 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 22:57:23.712136   53870 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 22:57:23.713799   53870 out.go:204]   - Booting up control plane ...
	I0717 22:57:23.713909   53870 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 22:57:23.714033   53870 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 22:57:23.714145   53870 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 22:57:23.714268   53870 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 22:57:23.714418   53870 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 22:57:23.714483   53870 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.004162 seconds
	I0717 22:57:23.714656   53870 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 22:57:23.714846   53870 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 22:57:23.714929   53870 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 22:57:23.715088   53870 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-332820 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0717 22:57:23.715170   53870 kubeadm.go:322] [bootstrap-token] Using token: sjemvm.5nuhmbx5uh7jm9fo
	I0717 22:57:23.716846   53870 out.go:204]   - Configuring RBAC rules ...
	I0717 22:57:23.716937   53870 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 22:57:23.717067   53870 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 22:57:23.717210   53870 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 22:57:23.717333   53870 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 22:57:23.717414   53870 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 22:57:23.717456   53870 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 22:57:23.717494   53870 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 22:57:23.717501   53870 kubeadm.go:322] 
	I0717 22:57:23.717564   53870 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 22:57:23.717571   53870 kubeadm.go:322] 
	I0717 22:57:23.717636   53870 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 22:57:23.717641   53870 kubeadm.go:322] 
	I0717 22:57:23.717662   53870 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 22:57:23.717733   53870 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 22:57:23.717783   53870 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 22:57:23.717791   53870 kubeadm.go:322] 
	I0717 22:57:23.717839   53870 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 22:57:23.717946   53870 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 22:57:23.718040   53870 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 22:57:23.718052   53870 kubeadm.go:322] 
	I0717 22:57:23.718172   53870 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0717 22:57:23.718289   53870 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 22:57:23.718299   53870 kubeadm.go:322] 
	I0717 22:57:23.718373   53870 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token sjemvm.5nuhmbx5uh7jm9fo \
	I0717 22:57:23.718476   53870 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb \
	I0717 22:57:23.718525   53870 kubeadm.go:322]     --control-plane 	  
	I0717 22:57:23.718539   53870 kubeadm.go:322] 
	I0717 22:57:23.718624   53870 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 22:57:23.718631   53870 kubeadm.go:322] 
	I0717 22:57:23.718703   53870 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token sjemvm.5nuhmbx5uh7jm9fo \
	I0717 22:57:23.718812   53870 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb 
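	The --discovery-token-ca-cert-hash printed in the join commands is the SHA-256 of the cluster CA public key. It can be recomputed on the node with the standard openssl pipeline from the kubeadm docs; the CA sits under /var/lib/minikube/certs here, per the certificateDir logged above:
	    # sha256 of the DER-encoded CA public key (path follows the logged certificateDir)
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'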
	I0717 22:57:23.718825   53870 cni.go:84] Creating CNI manager for ""
	I0717 22:57:23.718834   53870 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:57:23.720891   53870 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 22:57:23.128941   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:25.129405   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:27.129595   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:23.722935   53870 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 22:57:23.738547   53870 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
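	The 457-byte file written to /etc/cni/net.d/1-k8s.conflist is the bridge CNI chain minikube generates for the kvm2+crio combination. The log does not show its contents; as a rough sketch of what a bridge conflist generally looks like (illustrative field values and subnet, not the actual file):
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }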
	I0717 22:57:23.764002   53870 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 22:57:23.764109   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.0 minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8 minikube.k8s.io/name=old-k8s-version-332820 minikube.k8s.io/updated_at=2023_07_17T22_57_23_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:23.764127   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:23.835900   53870 ops.go:34] apiserver oom_adj: -16
	I0717 22:57:24.015975   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:24.622866   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:25.122754   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:25.622733   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:26.123442   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:26.623190   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:27.123191   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:27.622408   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:24.729678   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:26.730278   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:29.629588   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:32.130357   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:28.122555   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:28.622771   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:29.122717   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:29.622760   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:30.123186   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:30.622731   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:31.122724   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:31.622957   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:32.122775   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:32.622552   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:29.228462   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:31.232382   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:34.629160   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:37.128209   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:33.122703   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:33.623262   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:34.122574   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:34.623130   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:35.122819   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:35.622426   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:36.123262   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:36.622474   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:37.122820   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:37.623414   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:33.244514   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:35.735391   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:38.123076   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:38.622497   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:39.122826   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:39.220042   53870 kubeadm.go:1081] duration metric: took 15.45599881s to wait for elevateKubeSystemPrivileges.
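	The repeated "kubectl get sa default" calls above are minikube polling until the default service account exists, which is what elevateKubeSystemPrivileges waits on before binding cluster-admin. Condensed into a shell loop, the same check (reusing the exact command from the log) would be:
	    # poll roughly twice a second until the default ServiceAccount shows up
	    until sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done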
	I0717 22:57:39.220076   53870 kubeadm.go:406] StartCluster complete in 5m56.5464295s
	I0717 22:57:39.220095   53870 settings.go:142] acquiring lock: {Name:mk9be0d05c943f5010cb1ca9690e7c6a87f18950 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:57:39.220173   53870 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:57:39.221940   53870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/kubeconfig: {Name:mk1295c692902c2f497638c9fc1fd126fdb1a2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:57:39.222201   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 22:57:39.222371   53870 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 22:57:39.222458   53870 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-332820"
	I0717 22:57:39.222474   53870 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-332820"
	W0717 22:57:39.222486   53870 addons.go:240] addon storage-provisioner should already be in state true
	I0717 22:57:39.222517   53870 config.go:182] Loaded profile config "old-k8s-version-332820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0717 22:57:39.222533   53870 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-332820"
	I0717 22:57:39.222544   53870 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-332820"
	I0717 22:57:39.222528   53870 host.go:66] Checking if "old-k8s-version-332820" exists ...
	I0717 22:57:39.222906   53870 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-332820"
	I0717 22:57:39.222947   53870 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-332820"
	I0717 22:57:39.222955   53870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:57:39.222965   53870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:57:39.222978   53870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:57:39.222989   53870 main.go:141] libmachine: Launching plugin server for driver kvm2
	W0717 22:57:39.222958   53870 addons.go:240] addon metrics-server should already be in state true
	I0717 22:57:39.223266   53870 host.go:66] Checking if "old-k8s-version-332820" exists ...
	I0717 22:57:39.223611   53870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:57:39.223644   53870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:57:39.241834   53870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36257
	I0717 22:57:39.242161   53870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36597
	I0717 22:57:39.242290   53870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41325
	I0717 22:57:39.242409   53870 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:57:39.242525   53870 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:57:39.242699   53870 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:57:39.242983   53870 main.go:141] libmachine: Using API Version  1
	I0717 22:57:39.242995   53870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:57:39.243079   53870 main.go:141] libmachine: Using API Version  1
	I0717 22:57:39.243085   53870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:57:39.243146   53870 main.go:141] libmachine: Using API Version  1
	I0717 22:57:39.243152   53870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:57:39.243455   53870 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:57:39.243499   53870 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:57:39.243923   53870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:57:39.243955   53870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:57:39.244114   53870 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:57:39.244145   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetState
	I0717 22:57:39.244609   53870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:57:39.244636   53870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:57:39.264113   53870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38423
	I0717 22:57:39.264664   53870 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:57:39.265196   53870 main.go:141] libmachine: Using API Version  1
	I0717 22:57:39.265217   53870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:57:39.265738   53870 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:57:39.265990   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetState
	I0717 22:57:39.267754   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:57:39.269600   53870 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:57:39.269649   53870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37175
	I0717 22:57:39.271155   53870 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:57:39.271170   53870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 22:57:39.271196   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:57:39.271008   53870 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-332820"
	W0717 22:57:39.271246   53870 addons.go:240] addon default-storageclass should already be in state true
	I0717 22:57:39.271278   53870 host.go:66] Checking if "old-k8s-version-332820" exists ...
	I0717 22:57:39.271539   53870 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:57:39.271564   53870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:57:39.271582   53870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:57:39.272088   53870 main.go:141] libmachine: Using API Version  1
	I0717 22:57:39.272112   53870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:57:39.272450   53870 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:57:39.272628   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetState
	I0717 22:57:39.275001   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:57:39.276178   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:57:39.276580   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:57:39.276603   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:57:39.276866   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:57:39.277046   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:57:39.277173   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:57:39.277284   53870 sshutil.go:53] new ssh client: &{IP:192.168.50.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/old-k8s-version-332820/id_rsa Username:docker}
	I0717 22:57:39.279594   53870 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 22:57:39.281161   53870 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 22:57:39.281178   53870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 22:57:39.281197   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:57:39.284664   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:57:39.285093   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:57:39.285126   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:57:39.285323   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:57:39.285486   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:57:39.285624   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:57:39.285731   53870 sshutil.go:53] new ssh client: &{IP:192.168.50.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/old-k8s-version-332820/id_rsa Username:docker}
	I0717 22:57:39.291470   53870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44127
	I0717 22:57:39.291955   53870 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:57:39.292486   53870 main.go:141] libmachine: Using API Version  1
	I0717 22:57:39.292509   53870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:57:39.292896   53870 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:57:39.293409   53870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:57:39.293446   53870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:57:39.310134   53870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35331
	I0717 22:57:39.310626   53870 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:57:39.311202   53870 main.go:141] libmachine: Using API Version  1
	I0717 22:57:39.311227   53870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:57:39.311758   53870 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:57:39.311947   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetState
	I0717 22:57:39.314218   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:57:39.314495   53870 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 22:57:39.314506   53870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 22:57:39.314520   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:57:39.317813   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:57:39.321612   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:57:39.321659   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:57:39.321685   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:57:39.321771   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:57:39.321872   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:57:39.321963   53870 sshutil.go:53] new ssh client: &{IP:192.168.50.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/old-k8s-version-332820/id_rsa Username:docker}
	I0717 22:57:39.410805   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 22:57:39.448115   53870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:57:39.468015   53870 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 22:57:39.468044   53870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 22:57:39.510209   53870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 22:57:39.542977   53870 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 22:57:39.543006   53870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 22:57:39.621799   53870 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 22:57:39.621830   53870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 22:57:39.695813   53870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
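	The four manifests applied here install the metrics-server Deployment together with the APIService that registers it with the aggregation layer. A quick follow-up check of that registration (standard metrics-server object names, not taken from this log) looks like:
	    kubectl -n kube-system get deploy metrics-server
	    kubectl get apiservice v1beta1.metrics.k8s.io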
	I0717 22:57:39.820255   53870 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-332820" context rescaled to 1 replicas
	I0717 22:57:39.820293   53870 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.149 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 22:57:39.822441   53870 out.go:177] * Verifying Kubernetes components...
	I0717 22:57:39.824136   53870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:57:40.366843   53870 start.go:901] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
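	The sed pipeline run against the coredns ConfigMap a few lines earlier splices a hosts block in front of the forward plugin and a log directive before errors, so the patched Corefile ends up with roughly this shape (unrelated plugins elided):
	    .:53 {
	        log
	        errors
	        ...
	        hosts {
	           192.168.50.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        ...
	    }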
	I0717 22:57:40.692359   53870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.244194312s)
	I0717 22:57:40.692412   53870 main.go:141] libmachine: Making call to close driver server
	I0717 22:57:40.692417   53870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.18217225s)
	I0717 22:57:40.692451   53870 main.go:141] libmachine: Making call to close driver server
	I0717 22:57:40.692463   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .Close
	I0717 22:57:40.692427   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .Close
	I0717 22:57:40.692926   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | Closing plugin on server side
	I0717 22:57:40.692941   53870 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:57:40.692955   53870 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:57:40.692961   53870 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:57:40.692966   53870 main.go:141] libmachine: Making call to close driver server
	I0717 22:57:40.692971   53870 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:57:40.692977   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .Close
	I0717 22:57:40.692982   53870 main.go:141] libmachine: Making call to close driver server
	I0717 22:57:40.692993   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .Close
	I0717 22:57:40.693346   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | Closing plugin on server side
	I0717 22:57:40.693347   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | Closing plugin on server side
	I0717 22:57:40.693360   53870 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:57:40.693377   53870 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:57:40.693379   53870 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:57:40.693390   53870 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:57:40.693391   53870 main.go:141] libmachine: Making call to close driver server
	I0717 22:57:40.693402   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .Close
	I0717 22:57:40.693727   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | Closing plugin on server side
	I0717 22:57:40.695361   53870 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:57:40.695382   53870 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:57:41.360399   53870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.664534201s)
	I0717 22:57:41.360444   53870 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.536280934s)
	I0717 22:57:41.360477   53870 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-332820" to be "Ready" ...
	I0717 22:57:41.360484   53870 main.go:141] libmachine: Making call to close driver server
	I0717 22:57:41.360603   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .Close
	I0717 22:57:41.360912   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | Closing plugin on server side
	I0717 22:57:41.360959   53870 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:57:41.360976   53870 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:57:41.360986   53870 main.go:141] libmachine: Making call to close driver server
	I0717 22:57:41.361000   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .Close
	I0717 22:57:41.361267   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | Closing plugin on server side
	I0717 22:57:41.361323   53870 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:57:41.361335   53870 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:57:41.361350   53870 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-332820"
	I0717 22:57:41.364209   53870 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 22:57:39.128970   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:41.129335   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:41.365698   53870 addons.go:502] enable addons completed in 2.143322329s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 22:57:41.370307   53870 node_ready.go:49] node "old-k8s-version-332820" has status "Ready":"True"
	I0717 22:57:41.370334   53870 node_ready.go:38] duration metric: took 9.838563ms waiting for node "old-k8s-version-332820" to be "Ready" ...
	I0717 22:57:41.370345   53870 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:57:41.477919   53870 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-pjn9n" in "kube-system" namespace to be "Ready" ...
	I0717 22:57:38.229186   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:40.229347   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:42.730552   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:43.627986   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:46.126930   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:43.515865   53870 pod_ready.go:102] pod "coredns-5644d7b6d9-pjn9n" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:44.011451   53870 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-pjn9n" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-pjn9n" not found
	I0717 22:57:44.011475   53870 pod_ready.go:81] duration metric: took 2.533523466s waiting for pod "coredns-5644d7b6d9-pjn9n" in "kube-system" namespace to be "Ready" ...
	E0717 22:57:44.011483   53870 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-pjn9n" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-pjn9n" not found
	I0717 22:57:44.011490   53870 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-t4d2t" in "kube-system" namespace to be "Ready" ...
	I0717 22:57:46.023775   53870 pod_ready.go:102] pod "coredns-5644d7b6d9-t4d2t" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:45.229105   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:47.727715   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:48.128141   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:50.628216   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:48.523241   53870 pod_ready.go:102] pod "coredns-5644d7b6d9-t4d2t" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:50.024098   53870 pod_ready.go:92] pod "coredns-5644d7b6d9-t4d2t" in "kube-system" namespace has status "Ready":"True"
	I0717 22:57:50.024118   53870 pod_ready.go:81] duration metric: took 6.012622912s waiting for pod "coredns-5644d7b6d9-t4d2t" in "kube-system" namespace to be "Ready" ...
	I0717 22:57:50.024129   53870 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dpnlw" in "kube-system" namespace to be "Ready" ...
	I0717 22:57:50.029960   53870 pod_ready.go:92] pod "kube-proxy-dpnlw" in "kube-system" namespace has status "Ready":"True"
	I0717 22:57:50.029976   53870 pod_ready.go:81] duration metric: took 5.842404ms waiting for pod "kube-proxy-dpnlw" in "kube-system" namespace to be "Ready" ...
	I0717 22:57:50.029985   53870 pod_ready.go:38] duration metric: took 8.659630061s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:57:50.029998   53870 api_server.go:52] waiting for apiserver process to appear ...
	I0717 22:57:50.030036   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:57:50.046609   53870 api_server.go:72] duration metric: took 10.226287152s to wait for apiserver process to appear ...
	I0717 22:57:50.046634   53870 api_server.go:88] waiting for apiserver healthz status ...
	I0717 22:57:50.046654   53870 api_server.go:253] Checking apiserver healthz at https://192.168.50.149:8443/healthz ...
	I0717 22:57:50.053143   53870 api_server.go:279] https://192.168.50.149:8443/healthz returned 200:
	ok
	I0717 22:57:50.054242   53870 api_server.go:141] control plane version: v1.16.0
	I0717 22:57:50.054259   53870 api_server.go:131] duration metric: took 7.618888ms to wait for apiserver health ...
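	The healthz probe is a plain HTTPS GET against the apiserver; the same check from a shell (assuming anonymous access to /healthz, which the default system:public-info-viewer binding permits) is:
	    curl -k https://192.168.50.149:8443/healthz
	    # expected output: ok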
	I0717 22:57:50.054265   53870 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 22:57:50.059517   53870 system_pods.go:59] 4 kube-system pods found
	I0717 22:57:50.059537   53870 system_pods.go:61] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:50.059542   53870 system_pods.go:61] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:50.059550   53870 system_pods.go:61] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:50.059559   53870 system_pods.go:61] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:50.059567   53870 system_pods.go:74] duration metric: took 5.296559ms to wait for pod list to return data ...
	I0717 22:57:50.059575   53870 default_sa.go:34] waiting for default service account to be created ...
	I0717 22:57:50.062619   53870 default_sa.go:45] found service account: "default"
	I0717 22:57:50.062636   53870 default_sa.go:55] duration metric: took 3.055001ms for default service account to be created ...
	I0717 22:57:50.062643   53870 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 22:57:50.066927   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:50.066960   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:50.066969   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:50.066978   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:50.066987   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:50.067003   53870 retry.go:31] will retry after 260.087226ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:50.331854   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:50.331881   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:50.331886   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:50.331893   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:50.331899   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:50.331914   53870 retry.go:31] will retry after 352.733578ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:50.689437   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:50.689470   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:50.689478   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:50.689489   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:50.689497   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:50.689531   53870 retry.go:31] will retry after 448.974584ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:51.144027   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:51.144052   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:51.144057   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:51.144064   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:51.144072   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:51.144084   53870 retry.go:31] will retry after 388.759143ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:51.538649   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:51.538681   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:51.538690   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:51.538701   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:51.538709   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:51.538726   53870 retry.go:31] will retry after 516.772578ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:52.061223   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:52.061251   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:52.061257   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:52.061264   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:52.061270   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:52.061284   53870 retry.go:31] will retry after 640.645684ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:52.706812   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:52.706841   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:52.706848   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:52.706857   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:52.706865   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:52.706881   53870 retry.go:31] will retry after 800.353439ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
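	The retry loop here is waiting for the control-plane mirror pods (etcd, kube-apiserver, kube-controller-manager, kube-scheduler) to be re-registered with the apiserver after the reset/init cycle. A manual spot-check for the same pods, assuming the kubeadm-standard tier=control-plane label on the static pod manifests, would be:
	    sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      -n kube-system get pods -l tier=control-plane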
	I0717 22:57:49.728135   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:51.729859   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:53.128948   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:55.628153   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:53.512660   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:53.512702   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:53.512710   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:53.512720   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:53.512729   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:53.512746   53870 retry.go:31] will retry after 1.135974065s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:54.653983   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:54.654008   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:54.654013   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:54.654021   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:54.654027   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:54.654040   53870 retry.go:31] will retry after 1.807970353s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:56.466658   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:56.466685   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:56.466690   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:56.466697   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:56.466703   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:56.466717   53870 retry.go:31] will retry after 1.738235237s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:53.729966   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:56.229195   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:58.130852   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:00.627290   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:58.210259   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:58.210286   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:58.210291   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:58.210298   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:58.210304   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:58.210318   53870 retry.go:31] will retry after 2.588058955s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:58:00.805164   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:58:00.805189   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:58:00.805195   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:58:00.805204   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:58:00.805212   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:58:00.805229   53870 retry.go:31] will retry after 2.395095199s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:58.230452   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:00.730302   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:02.627408   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:05.127023   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:03.205614   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:58:03.205641   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:58:03.205646   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:58:03.205654   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:58:03.205661   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:58:03.205673   53870 retry.go:31] will retry after 3.552797061s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:58:06.765112   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:58:06.765169   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:58:06.765189   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:58:06.765202   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:58:06.765211   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:58:06.765229   53870 retry.go:31] will retry after 3.62510644s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:58:03.229254   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:05.729500   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:07.627727   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:10.127545   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:10.396156   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:58:10.396185   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:58:10.396193   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:58:10.396202   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:58:10.396210   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:58:10.396234   53870 retry.go:31] will retry after 7.05504218s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:58:08.230115   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:10.729252   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:12.729814   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:12.627688   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:14.629102   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:17.126975   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:17.458031   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:58:17.458055   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:58:17.458060   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:58:17.458067   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:58:17.458072   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:58:17.458085   53870 retry.go:31] will retry after 7.079137896s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:58:15.228577   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:17.229657   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:19.127827   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:21.627879   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:19.733111   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:22.229170   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:24.128551   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:26.627380   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:24.542750   53870 system_pods.go:86] 5 kube-system pods found
	I0717 22:58:24.542779   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:58:24.542785   53870 system_pods.go:89] "kube-controller-manager-old-k8s-version-332820" [6178a2db-2800-4689-bb29-5fd220cf3560] Running
	I0717 22:58:24.542789   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:58:24.542796   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:58:24.542801   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:58:24.542814   53870 retry.go:31] will retry after 10.245831604s: missing components: etcd, kube-apiserver, kube-scheduler
	I0717 22:58:24.729548   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:27.228785   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:28.627425   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:30.627791   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:29.728922   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:31.729450   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:32.628481   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:35.127509   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:37.128620   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:34.794623   53870 system_pods.go:86] 6 kube-system pods found
	I0717 22:58:34.794652   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:58:34.794658   53870 system_pods.go:89] "kube-apiserver-old-k8s-version-332820" [a92ec810-2496-4702-96cb-b99972aa0907] Running
	I0717 22:58:34.794662   53870 system_pods.go:89] "kube-controller-manager-old-k8s-version-332820" [6178a2db-2800-4689-bb29-5fd220cf3560] Running
	I0717 22:58:34.794666   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:58:34.794673   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:58:34.794678   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:58:34.794692   53870 retry.go:31] will retry after 13.54688256s: missing components: etcd, kube-scheduler
	I0717 22:58:33.732071   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:36.230099   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:39.627130   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:41.628484   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:38.230167   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:40.728553   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:42.730438   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:44.129730   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:46.130222   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:45.228042   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:47.230684   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:48.627207   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:51.127809   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:48.348380   53870 system_pods.go:86] 8 kube-system pods found
	I0717 22:58:48.348409   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:58:48.348415   53870 system_pods.go:89] "etcd-old-k8s-version-332820" [2182326c-a489-44f6-a2bb-4d238d500cd4] Pending
	I0717 22:58:48.348419   53870 system_pods.go:89] "kube-apiserver-old-k8s-version-332820" [a92ec810-2496-4702-96cb-b99972aa0907] Running
	I0717 22:58:48.348424   53870 system_pods.go:89] "kube-controller-manager-old-k8s-version-332820" [6178a2db-2800-4689-bb29-5fd220cf3560] Running
	I0717 22:58:48.348429   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:58:48.348433   53870 system_pods.go:89] "kube-scheduler-old-k8s-version-332820" [6145ebf3-1505-4eee-be83-b473b2d6eb16] Running
	I0717 22:58:48.348440   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:58:48.348448   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:58:48.348460   53870 retry.go:31] will retry after 11.748298579s: missing components: etcd
	I0717 22:58:49.730893   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:51.731624   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:53.131814   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:55.628315   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:54.229398   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:56.232954   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:00.104576   53870 system_pods.go:86] 8 kube-system pods found
	I0717 22:59:00.104603   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:59:00.104609   53870 system_pods.go:89] "etcd-old-k8s-version-332820" [2182326c-a489-44f6-a2bb-4d238d500cd4] Running
	I0717 22:59:00.104613   53870 system_pods.go:89] "kube-apiserver-old-k8s-version-332820" [a92ec810-2496-4702-96cb-b99972aa0907] Running
	I0717 22:59:00.104618   53870 system_pods.go:89] "kube-controller-manager-old-k8s-version-332820" [6178a2db-2800-4689-bb29-5fd220cf3560] Running
	I0717 22:59:00.104622   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:59:00.104626   53870 system_pods.go:89] "kube-scheduler-old-k8s-version-332820" [6145ebf3-1505-4eee-be83-b473b2d6eb16] Running
	I0717 22:59:00.104632   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:59:00.104638   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:59:00.104646   53870 system_pods.go:126] duration metric: took 1m10.041998574s to wait for k8s-apps to be running ...
	I0717 22:59:00.104654   53870 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 22:59:00.104712   53870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:59:00.127311   53870 system_svc.go:56] duration metric: took 22.647393ms WaitForService to wait for kubelet.
	I0717 22:59:00.127340   53870 kubeadm.go:581] duration metric: took 1m20.307024254s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 22:59:00.127365   53870 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:59:00.131417   53870 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:59:00.131440   53870 node_conditions.go:123] node cpu capacity is 2
	I0717 22:59:00.131451   53870 node_conditions.go:105] duration metric: took 4.081643ms to run NodePressure ...
	I0717 22:59:00.131462   53870 start.go:228] waiting for startup goroutines ...
	I0717 22:59:00.131468   53870 start.go:233] waiting for cluster config update ...
	I0717 22:59:00.131478   53870 start.go:242] writing updated cluster config ...
	I0717 22:59:00.131776   53870 ssh_runner.go:195] Run: rm -f paused
	I0717 22:59:00.183048   53870 start.go:578] kubectl: 1.27.3, cluster: 1.16.0 (minor skew: 11)
	I0717 22:59:00.184945   53870 out.go:177] 
	W0717 22:59:00.186221   53870 out.go:239] ! /usr/local/bin/kubectl is version 1.27.3, which may have incompatibilities with Kubernetes 1.16.0.
	I0717 22:59:00.187477   53870 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0717 22:59:00.188679   53870 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-332820" cluster and "default" namespace by default
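	(The lines above show the start sequence for the "old-k8s-version-332820" cluster repeatedly re-listing kube-system pods and retrying with a growing delay until etcd, kube-apiserver, kube-controller-manager and kube-scheduler appear. The following is only a minimal, hypothetical Go sketch of that wait-with-backoff pattern, not minikube's actual retry.go/system_pods.go code; the component list, delay growth factor, and helper names are assumptions for illustration.)

	```go
	// Package wait: hypothetical sketch of "poll kube-system until all expected
	// control-plane components have a pod, retrying with a growing delay".
	package wait

	import (
		"context"
		"fmt"
		"strings"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// expected mirrors the components the log waits for (names are assumptions).
	var expected = []string{"etcd", "kube-apiserver", "kube-controller-manager", "kube-scheduler", "kube-proxy", "coredns"}

	// WaitForSystemPods polls the kube-system namespace until every expected
	// component has at least one pod, sleeping a little longer after each miss.
	func WaitForSystemPods(ctx context.Context, cs kubernetes.Interface, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		delay := 300 * time.Millisecond
		for {
			missing := []string{}
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
			if err == nil {
				for _, want := range expected {
					found := false
					for _, p := range pods.Items {
						if strings.HasPrefix(p.Name, want) {
							found = true
							break
						}
					}
					if !found {
						missing = append(missing, want)
					}
				}
				if len(missing) == 0 {
					return nil // all components present, as at 22:59:00 in the log above
				}
				fmt.Printf("will retry after %v: missing components: %s\n", delay, strings.Join(missing, ", "))
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for kube-system components: %v", missing)
			}
			time.Sleep(delay)
			delay = delay * 3 / 2 // grow the interval roughly like the log's 388ms -> 516ms -> 640ms progression
		}
	}
	```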
	I0717 22:58:57.628894   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:59.629684   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:02.128694   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:58.730891   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:00.731091   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:04.627812   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:06.628434   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:03.230847   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:05.728807   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:07.728897   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:08.630065   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:11.128988   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:09.729866   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:12.229160   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:13.627995   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:16.128000   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:14.728745   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:16.733743   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:18.131709   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:20.628704   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:19.234979   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:21.730483   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:22.629821   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:25.127417   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:27.127827   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:24.229123   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:26.728729   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:29.629594   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:32.126711   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:28.729318   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:30.729924   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:32.731713   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:34.627629   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:37.128939   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:35.228008   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:37.233675   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:39.628990   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:41.629614   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:39.729052   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:41.730060   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:44.127514   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:46.128048   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:44.228115   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:46.229857   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:48.128761   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:50.631119   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:48.728917   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:50.730222   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:52.731295   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:53.127276   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:55.127950   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:57.128481   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:55.228655   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:57.228813   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:59.626761   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:01.628045   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:59.229493   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:01.230143   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:04.127371   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:06.128098   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:03.728770   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:06.228708   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:08.128197   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:10.626883   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:08.229060   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:10.727573   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:12.730410   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:12.628273   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:14.629361   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:17.127148   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:13.822400   54248 pod_ready.go:81] duration metric: took 4m0.000761499s waiting for pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace to be "Ready" ...
	E0717 23:00:13.822430   54248 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 23:00:13.822438   54248 pod_ready.go:38] duration metric: took 4m2.778910042s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 23:00:13.822455   54248 api_server.go:52] waiting for apiserver process to appear ...
	I0717 23:00:13.822482   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 23:00:13.822546   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 23:00:13.868846   54248 cri.go:89] found id: "50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c"
	I0717 23:00:13.868873   54248 cri.go:89] found id: ""
	I0717 23:00:13.868884   54248 logs.go:284] 1 containers: [50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c]
	I0717 23:00:13.868951   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:13.873997   54248 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 23:00:13.874077   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 23:00:13.904386   54248 cri.go:89] found id: "e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2"
	I0717 23:00:13.904415   54248 cri.go:89] found id: ""
	I0717 23:00:13.904425   54248 logs.go:284] 1 containers: [e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2]
	I0717 23:00:13.904486   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:13.909075   54248 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 23:00:13.909127   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 23:00:13.940628   54248 cri.go:89] found id: "828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594"
	I0717 23:00:13.940657   54248 cri.go:89] found id: ""
	I0717 23:00:13.940667   54248 logs.go:284] 1 containers: [828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594]
	I0717 23:00:13.940721   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:13.945076   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 23:00:13.945132   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 23:00:13.976589   54248 cri.go:89] found id: "a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e"
	I0717 23:00:13.976612   54248 cri.go:89] found id: ""
	I0717 23:00:13.976621   54248 logs.go:284] 1 containers: [a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e]
	I0717 23:00:13.976684   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:13.981163   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 23:00:13.981231   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 23:00:14.018277   54248 cri.go:89] found id: "5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea"
	I0717 23:00:14.018298   54248 cri.go:89] found id: ""
	I0717 23:00:14.018308   54248 logs.go:284] 1 containers: [5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea]
	I0717 23:00:14.018370   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:14.022494   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 23:00:14.022557   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 23:00:14.055302   54248 cri.go:89] found id: "0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135"
	I0717 23:00:14.055327   54248 cri.go:89] found id: ""
	I0717 23:00:14.055336   54248 logs.go:284] 1 containers: [0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135]
	I0717 23:00:14.055388   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:14.059980   54248 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 23:00:14.060041   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 23:00:14.092467   54248 cri.go:89] found id: ""
	I0717 23:00:14.092495   54248 logs.go:284] 0 containers: []
	W0717 23:00:14.092505   54248 logs.go:286] No container was found matching "kindnet"
	I0717 23:00:14.092512   54248 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 23:00:14.092570   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 23:00:14.127348   54248 cri.go:89] found id: "9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87"
	I0717 23:00:14.127370   54248 cri.go:89] found id: ""
	I0717 23:00:14.127383   54248 logs.go:284] 1 containers: [9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87]
	I0717 23:00:14.127438   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:14.132646   54248 logs.go:123] Gathering logs for dmesg ...
	I0717 23:00:14.132673   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 23:00:14.147882   54248 logs.go:123] Gathering logs for kube-apiserver [50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c] ...
	I0717 23:00:14.147911   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c"
	I0717 23:00:14.198417   54248 logs.go:123] Gathering logs for etcd [e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2] ...
	I0717 23:00:14.198466   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2"
	I0717 23:00:14.244734   54248 logs.go:123] Gathering logs for kube-scheduler [a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e] ...
	I0717 23:00:14.244775   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e"
	I0717 23:00:14.287920   54248 logs.go:123] Gathering logs for storage-provisioner [9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87] ...
	I0717 23:00:14.287956   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87"
	I0717 23:00:14.333785   54248 logs.go:123] Gathering logs for container status ...
	I0717 23:00:14.333820   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 23:00:14.378892   54248 logs.go:123] Gathering logs for kubelet ...
	I0717 23:00:14.378930   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 23:00:14.482292   54248 logs.go:123] Gathering logs for coredns [828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594] ...
	I0717 23:00:14.482332   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594"
	I0717 23:00:14.525418   54248 logs.go:123] Gathering logs for kube-proxy [5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea] ...
	I0717 23:00:14.525445   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea"
	I0717 23:00:14.562013   54248 logs.go:123] Gathering logs for kube-controller-manager [0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135] ...
	I0717 23:00:14.562050   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135"
	I0717 23:00:14.609917   54248 logs.go:123] Gathering logs for CRI-O ...
	I0717 23:00:14.609955   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 23:00:15.088465   54248 logs.go:123] Gathering logs for describe nodes ...
	I0717 23:00:15.088502   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 23:00:17.743963   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 23:00:17.761437   54248 api_server.go:72] duration metric: took 4m9.176341685s to wait for apiserver process to appear ...
	I0717 23:00:17.761464   54248 api_server.go:88] waiting for apiserver healthz status ...
	I0717 23:00:17.761499   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 23:00:17.761569   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 23:00:17.796097   54248 cri.go:89] found id: "50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c"
	I0717 23:00:17.796126   54248 cri.go:89] found id: ""
	I0717 23:00:17.796136   54248 logs.go:284] 1 containers: [50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c]
	I0717 23:00:17.796194   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:17.800256   54248 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 23:00:17.800318   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 23:00:17.830519   54248 cri.go:89] found id: "e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2"
	I0717 23:00:17.830540   54248 cri.go:89] found id: ""
	I0717 23:00:17.830549   54248 logs.go:284] 1 containers: [e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2]
	I0717 23:00:17.830597   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:17.835086   54248 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 23:00:17.835158   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 23:00:17.869787   54248 cri.go:89] found id: "828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594"
	I0717 23:00:17.869810   54248 cri.go:89] found id: ""
	I0717 23:00:17.869817   54248 logs.go:284] 1 containers: [828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594]
	I0717 23:00:17.869865   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:17.874977   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 23:00:17.875042   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 23:00:17.906026   54248 cri.go:89] found id: "a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e"
	I0717 23:00:17.906060   54248 cri.go:89] found id: ""
	I0717 23:00:17.906070   54248 logs.go:284] 1 containers: [a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e]
	I0717 23:00:17.906130   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:17.912549   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 23:00:17.912619   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 23:00:17.945804   54248 cri.go:89] found id: "5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea"
	I0717 23:00:17.945832   54248 cri.go:89] found id: ""
	I0717 23:00:17.945842   54248 logs.go:284] 1 containers: [5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea]
	I0717 23:00:17.945892   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:17.950115   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 23:00:17.950170   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 23:00:17.980790   54248 cri.go:89] found id: "0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135"
	I0717 23:00:17.980816   54248 cri.go:89] found id: ""
	I0717 23:00:17.980825   54248 logs.go:284] 1 containers: [0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135]
	I0717 23:00:17.980893   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:19.127901   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:21.628419   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:17.985352   54248 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 23:00:17.987262   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 23:00:18.019763   54248 cri.go:89] found id: ""
	I0717 23:00:18.019794   54248 logs.go:284] 0 containers: []
	W0717 23:00:18.019804   54248 logs.go:286] No container was found matching "kindnet"
	I0717 23:00:18.019812   54248 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 23:00:18.019875   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 23:00:18.052106   54248 cri.go:89] found id: "9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87"
	I0717 23:00:18.052135   54248 cri.go:89] found id: ""
	I0717 23:00:18.052144   54248 logs.go:284] 1 containers: [9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87]
	I0717 23:00:18.052192   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:18.057066   54248 logs.go:123] Gathering logs for kube-scheduler [a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e] ...
	I0717 23:00:18.057093   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e"
	I0717 23:00:18.100637   54248 logs.go:123] Gathering logs for kube-proxy [5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea] ...
	I0717 23:00:18.100672   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea"
	I0717 23:00:18.137149   54248 logs.go:123] Gathering logs for kube-controller-manager [0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135] ...
	I0717 23:00:18.137176   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135"
	I0717 23:00:18.191633   54248 logs.go:123] Gathering logs for storage-provisioner [9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87] ...
	I0717 23:00:18.191679   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87"
	I0717 23:00:18.231765   54248 logs.go:123] Gathering logs for dmesg ...
	I0717 23:00:18.231798   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 23:00:18.250030   54248 logs.go:123] Gathering logs for kube-apiserver [50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c] ...
	I0717 23:00:18.250061   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c"
	I0717 23:00:18.312833   54248 logs.go:123] Gathering logs for etcd [e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2] ...
	I0717 23:00:18.312881   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2"
	I0717 23:00:18.357152   54248 logs.go:123] Gathering logs for coredns [828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594] ...
	I0717 23:00:18.357190   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594"
	I0717 23:00:18.388834   54248 logs.go:123] Gathering logs for kubelet ...
	I0717 23:00:18.388871   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 23:00:18.491866   54248 logs.go:123] Gathering logs for describe nodes ...
	I0717 23:00:18.491898   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 23:00:18.638732   54248 logs.go:123] Gathering logs for CRI-O ...
	I0717 23:00:18.638761   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 23:00:19.135753   54248 logs.go:123] Gathering logs for container status ...
	I0717 23:00:19.135788   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 23:00:21.678446   54248 api_server.go:253] Checking apiserver healthz at https://192.168.61.179:8443/healthz ...
	I0717 23:00:21.684484   54248 api_server.go:279] https://192.168.61.179:8443/healthz returned 200:
	ok
	I0717 23:00:21.686359   54248 api_server.go:141] control plane version: v1.27.3
	I0717 23:00:21.686385   54248 api_server.go:131] duration metric: took 3.924913504s to wait for apiserver health ...
	I0717 23:00:21.686395   54248 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 23:00:21.686420   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 23:00:21.686476   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 23:00:21.720978   54248 cri.go:89] found id: "50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c"
	I0717 23:00:21.721002   54248 cri.go:89] found id: ""
	I0717 23:00:21.721012   54248 logs.go:284] 1 containers: [50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c]
	I0717 23:00:21.721070   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:21.726790   54248 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 23:00:21.726860   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 23:00:21.756975   54248 cri.go:89] found id: "e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2"
	I0717 23:00:21.757001   54248 cri.go:89] found id: ""
	I0717 23:00:21.757011   54248 logs.go:284] 1 containers: [e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2]
	I0717 23:00:21.757078   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:21.761611   54248 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 23:00:21.761681   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 23:00:21.795689   54248 cri.go:89] found id: "828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594"
	I0717 23:00:21.795709   54248 cri.go:89] found id: ""
	I0717 23:00:21.795716   54248 logs.go:284] 1 containers: [828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594]
	I0717 23:00:21.795767   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:21.800172   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 23:00:21.800236   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 23:00:21.833931   54248 cri.go:89] found id: "a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e"
	I0717 23:00:21.833957   54248 cri.go:89] found id: ""
	I0717 23:00:21.833968   54248 logs.go:284] 1 containers: [a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e]
	I0717 23:00:21.834026   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:21.839931   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 23:00:21.840003   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 23:00:21.874398   54248 cri.go:89] found id: "5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea"
	I0717 23:00:21.874423   54248 cri.go:89] found id: ""
	I0717 23:00:21.874432   54248 logs.go:284] 1 containers: [5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea]
	I0717 23:00:21.874489   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:21.878922   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 23:00:21.878986   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 23:00:21.913781   54248 cri.go:89] found id: "0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135"
	I0717 23:00:21.913812   54248 cri.go:89] found id: ""
	I0717 23:00:21.913821   54248 logs.go:284] 1 containers: [0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135]
	I0717 23:00:21.913877   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:21.918217   54248 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 23:00:21.918284   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 23:00:21.951832   54248 cri.go:89] found id: ""
	I0717 23:00:21.951859   54248 logs.go:284] 0 containers: []
	W0717 23:00:21.951869   54248 logs.go:286] No container was found matching "kindnet"
	I0717 23:00:21.951876   54248 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 23:00:21.951925   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 23:00:21.987514   54248 cri.go:89] found id: "9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87"
	I0717 23:00:21.987543   54248 cri.go:89] found id: ""
	I0717 23:00:21.987553   54248 logs.go:284] 1 containers: [9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87]
	I0717 23:00:21.987617   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:21.992144   54248 logs.go:123] Gathering logs for container status ...
	I0717 23:00:21.992164   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 23:00:22.031685   54248 logs.go:123] Gathering logs for dmesg ...
	I0717 23:00:22.031715   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 23:00:22.046652   54248 logs.go:123] Gathering logs for describe nodes ...
	I0717 23:00:22.046691   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 23:00:22.191164   54248 logs.go:123] Gathering logs for kube-scheduler [a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e] ...
	I0717 23:00:22.191191   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e"
	I0717 23:00:22.233174   54248 logs.go:123] Gathering logs for kube-proxy [5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea] ...
	I0717 23:00:22.233209   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea"
	I0717 23:00:22.279246   54248 logs.go:123] Gathering logs for kube-controller-manager [0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135] ...
	I0717 23:00:22.279273   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135"
	I0717 23:00:22.330534   54248 logs.go:123] Gathering logs for CRI-O ...
	I0717 23:00:22.330565   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 23:00:22.837335   54248 logs.go:123] Gathering logs for kubelet ...
	I0717 23:00:22.837382   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 23:00:22.947015   54248 logs.go:123] Gathering logs for kube-apiserver [50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c] ...
	I0717 23:00:22.947073   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c"
	I0717 23:00:22.991731   54248 logs.go:123] Gathering logs for etcd [e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2] ...
	I0717 23:00:22.991768   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2"
	I0717 23:00:23.036115   54248 logs.go:123] Gathering logs for coredns [828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594] ...
	I0717 23:00:23.036146   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594"
	I0717 23:00:23.071825   54248 logs.go:123] Gathering logs for storage-provisioner [9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87] ...
	I0717 23:00:23.071860   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87"
	I0717 23:00:25.629247   54248 system_pods.go:59] 8 kube-system pods found
	I0717 23:00:25.629277   54248 system_pods.go:61] "coredns-5d78c9869d-6ljtn" [9488690c-8407-42ce-9938-039af0fa2c4d] Running
	I0717 23:00:25.629284   54248 system_pods.go:61] "etcd-embed-certs-571296" [e6e8b5d1-b1e7-4c3d-89d7-f44a2a6aff8b] Running
	I0717 23:00:25.629291   54248 system_pods.go:61] "kube-apiserver-embed-certs-571296" [3b5f5396-d325-445c-b3af-4cc7a506143e] Running
	I0717 23:00:25.629298   54248 system_pods.go:61] "kube-controller-manager-embed-certs-571296" [e113ffeb-97bd-4b0d-a432-b58be43b295b] Running
	I0717 23:00:25.629305   54248 system_pods.go:61] "kube-proxy-xjpds" [7c074cca-2579-4a54-bf55-77bba0fbcd34] Running
	I0717 23:00:25.629311   54248 system_pods.go:61] "kube-scheduler-embed-certs-571296" [1d192365-8c7b-4367-b4b0-fe9f6f5874af] Running
	I0717 23:00:25.629320   54248 system_pods.go:61] "metrics-server-74d5c6b9c-cknmm" [d1fb930f-518d-4ff4-94fe-7743ab55ecc6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 23:00:25.629331   54248 system_pods.go:61] "storage-provisioner" [1138e736-ef8d-4d24-86d5-cac3f58f0dd6] Running
	I0717 23:00:25.629339   54248 system_pods.go:74] duration metric: took 3.942938415s to wait for pod list to return data ...
	I0717 23:00:25.629347   54248 default_sa.go:34] waiting for default service account to be created ...
	I0717 23:00:25.632079   54248 default_sa.go:45] found service account: "default"
	I0717 23:00:25.632105   54248 default_sa.go:55] duration metric: took 2.751332ms for default service account to be created ...
	I0717 23:00:25.632114   54248 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 23:00:25.639267   54248 system_pods.go:86] 8 kube-system pods found
	I0717 23:00:25.639297   54248 system_pods.go:89] "coredns-5d78c9869d-6ljtn" [9488690c-8407-42ce-9938-039af0fa2c4d] Running
	I0717 23:00:25.639305   54248 system_pods.go:89] "etcd-embed-certs-571296" [e6e8b5d1-b1e7-4c3d-89d7-f44a2a6aff8b] Running
	I0717 23:00:25.639312   54248 system_pods.go:89] "kube-apiserver-embed-certs-571296" [3b5f5396-d325-445c-b3af-4cc7a506143e] Running
	I0717 23:00:25.639321   54248 system_pods.go:89] "kube-controller-manager-embed-certs-571296" [e113ffeb-97bd-4b0d-a432-b58be43b295b] Running
	I0717 23:00:25.639328   54248 system_pods.go:89] "kube-proxy-xjpds" [7c074cca-2579-4a54-bf55-77bba0fbcd34] Running
	I0717 23:00:25.639335   54248 system_pods.go:89] "kube-scheduler-embed-certs-571296" [1d192365-8c7b-4367-b4b0-fe9f6f5874af] Running
	I0717 23:00:25.639345   54248 system_pods.go:89] "metrics-server-74d5c6b9c-cknmm" [d1fb930f-518d-4ff4-94fe-7743ab55ecc6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 23:00:25.639353   54248 system_pods.go:89] "storage-provisioner" [1138e736-ef8d-4d24-86d5-cac3f58f0dd6] Running
	I0717 23:00:25.639362   54248 system_pods.go:126] duration metric: took 7.242476ms to wait for k8s-apps to be running ...
	I0717 23:00:25.639374   54248 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 23:00:25.639426   54248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 23:00:25.654026   54248 system_svc.go:56] duration metric: took 14.646361ms WaitForService to wait for kubelet.
	I0717 23:00:25.654049   54248 kubeadm.go:581] duration metric: took 4m17.068957071s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 23:00:25.654069   54248 node_conditions.go:102] verifying NodePressure condition ...
	I0717 23:00:25.658024   54248 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 23:00:25.658049   54248 node_conditions.go:123] node cpu capacity is 2
	I0717 23:00:25.658058   54248 node_conditions.go:105] duration metric: took 3.985859ms to run NodePressure ...
	I0717 23:00:25.658069   54248 start.go:228] waiting for startup goroutines ...
	I0717 23:00:25.658074   54248 start.go:233] waiting for cluster config update ...
	I0717 23:00:25.658083   54248 start.go:242] writing updated cluster config ...
	I0717 23:00:25.658335   54248 ssh_runner.go:195] Run: rm -f paused
	I0717 23:00:25.709576   54248 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 23:00:25.711805   54248 out.go:177] * Done! kubectl is now configured to use "embed-certs-571296" cluster and "default" namespace by default
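The container-discovery steps logged above (the repeated "sudo crictl ps -a --quiet --name=<component>" runs followed by "cri.go:89] found id:" lines) amount to asking the CRI runtime for container IDs by component name. The following is only a rough Go sketch of that kind of lookup, not minikube's actual cri.go code; it assumes crictl is installed and reachable via sudo, and the function name listContainerIDs is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the container IDs crictl prints (one per line) for
// containers whose name matches the given component, e.g. "kube-apiserver",
// mirroring the "crictl ps -a --quiet --name=..." calls in the log above.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listContainerIDs("kube-apiserver")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	// Comparable to the log's "logs.go:284] 1 containers: [...]" lines.
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}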
	I0717 23:00:24.128252   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:26.130357   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:28.627639   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:30.627679   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:33.128946   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:35.627313   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:37.627998   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:40.128503   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:42.629092   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:45.126773   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:47.127774   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:49.128495   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:51.628994   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:54.127925   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:56.128908   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:56.725699   54649 pod_ready.go:81] duration metric: took 4m0.000620769s waiting for pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace to be "Ready" ...
	E0717 23:00:56.725751   54649 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 23:00:56.725769   54649 pod_ready.go:38] duration metric: took 4m2.87768055s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 23:00:56.725797   54649 api_server.go:52] waiting for apiserver process to appear ...
	I0717 23:00:56.725839   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 23:00:56.725908   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 23:00:56.788229   54649 cri.go:89] found id: "45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0"
	I0717 23:00:56.788257   54649 cri.go:89] found id: ""
	I0717 23:00:56.788266   54649 logs.go:284] 1 containers: [45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0]
	I0717 23:00:56.788337   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:00:56.793647   54649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 23:00:56.793709   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 23:00:56.828720   54649 cri.go:89] found id: "7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab"
	I0717 23:00:56.828741   54649 cri.go:89] found id: ""
	I0717 23:00:56.828748   54649 logs.go:284] 1 containers: [7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab]
	I0717 23:00:56.828790   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:00:56.833266   54649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 23:00:56.833339   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 23:00:56.865377   54649 cri.go:89] found id: "30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223"
	I0717 23:00:56.865407   54649 cri.go:89] found id: ""
	I0717 23:00:56.865416   54649 logs.go:284] 1 containers: [30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223]
	I0717 23:00:56.865478   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:00:56.870881   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 23:00:56.870944   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 23:00:56.908871   54649 cri.go:89] found id: "4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad"
	I0717 23:00:56.908891   54649 cri.go:89] found id: ""
	I0717 23:00:56.908899   54649 logs.go:284] 1 containers: [4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad]
	I0717 23:00:56.908952   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:00:56.913121   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 23:00:56.913171   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 23:00:56.946752   54649 cri.go:89] found id: "a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524"
	I0717 23:00:56.946797   54649 cri.go:89] found id: ""
	I0717 23:00:56.946806   54649 logs.go:284] 1 containers: [a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524]
	I0717 23:00:56.946864   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:00:56.951141   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 23:00:56.951216   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 23:00:56.986967   54649 cri.go:89] found id: "7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10"
	I0717 23:00:56.986987   54649 cri.go:89] found id: ""
	I0717 23:00:56.986996   54649 logs.go:284] 1 containers: [7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10]
	I0717 23:00:56.987039   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:00:56.993578   54649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 23:00:56.993655   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 23:00:57.030468   54649 cri.go:89] found id: ""
	I0717 23:00:57.030491   54649 logs.go:284] 0 containers: []
	W0717 23:00:57.030498   54649 logs.go:286] No container was found matching "kindnet"
	I0717 23:00:57.030503   54649 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 23:00:57.030548   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 23:00:57.070533   54649 cri.go:89] found id: "4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6"
	I0717 23:00:57.070564   54649 cri.go:89] found id: ""
	I0717 23:00:57.070574   54649 logs.go:284] 1 containers: [4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6]
	I0717 23:00:57.070632   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:00:57.075379   54649 logs.go:123] Gathering logs for container status ...
	I0717 23:00:57.075685   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 23:00:57.121312   54649 logs.go:123] Gathering logs for kubelet ...
	I0717 23:00:57.121343   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 23:00:57.222647   54649 logs.go:138] Found kubelet problem: Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: W0717 22:56:50.841820    3828 reflector.go:533] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	W0717 23:00:57.222960   54649 logs.go:138] Found kubelet problem: Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: E0717 22:56:50.841863    3828 reflector.go:148] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	I0717 23:00:57.251443   54649 logs.go:123] Gathering logs for dmesg ...
	I0717 23:00:57.251481   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 23:00:57.266213   54649 logs.go:123] Gathering logs for coredns [30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223] ...
	I0717 23:00:57.266242   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223"
	I0717 23:00:57.304032   54649 logs.go:123] Gathering logs for kube-proxy [a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524] ...
	I0717 23:00:57.304058   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524"
	I0717 23:00:57.342839   54649 logs.go:123] Gathering logs for storage-provisioner [4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6] ...
	I0717 23:00:57.342865   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6"
	I0717 23:00:57.378086   54649 logs.go:123] Gathering logs for CRI-O ...
	I0717 23:00:57.378118   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 23:00:57.893299   54649 logs.go:123] Gathering logs for describe nodes ...
	I0717 23:00:57.893338   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 23:00:58.043526   54649 logs.go:123] Gathering logs for kube-apiserver [45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0] ...
	I0717 23:00:58.043564   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0"
	I0717 23:00:58.096422   54649 logs.go:123] Gathering logs for etcd [7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab] ...
	I0717 23:00:58.096452   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab"
	I0717 23:00:58.141423   54649 logs.go:123] Gathering logs for kube-scheduler [4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad] ...
	I0717 23:00:58.141452   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad"
	I0717 23:00:58.183755   54649 logs.go:123] Gathering logs for kube-controller-manager [7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10] ...
	I0717 23:00:58.183792   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10"
	I0717 23:00:58.239385   54649 out.go:309] Setting ErrFile to fd 2...
	I0717 23:00:58.239418   54649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0717 23:00:58.239479   54649 out.go:239] X Problems detected in kubelet:
	W0717 23:00:58.239506   54649 out.go:239]   Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: W0717 22:56:50.841820    3828 reflector.go:533] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	W0717 23:00:58.239522   54649 out.go:239]   Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: E0717 22:56:50.841863    3828 reflector.go:148] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	I0717 23:00:58.239527   54649 out.go:309] Setting ErrFile to fd 2...
	I0717 23:00:58.239533   54649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 23:01:08.241689   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 23:01:08.259063   54649 api_server.go:72] duration metric: took 4m17.020334708s to wait for apiserver process to appear ...
	I0717 23:01:08.259090   54649 api_server.go:88] waiting for apiserver healthz status ...
	I0717 23:01:08.259125   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 23:01:08.259186   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 23:01:08.289063   54649 cri.go:89] found id: "45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0"
	I0717 23:01:08.289080   54649 cri.go:89] found id: ""
	I0717 23:01:08.289088   54649 logs.go:284] 1 containers: [45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0]
	I0717 23:01:08.289146   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:08.293604   54649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 23:01:08.293668   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 23:01:08.323866   54649 cri.go:89] found id: "7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab"
	I0717 23:01:08.323889   54649 cri.go:89] found id: ""
	I0717 23:01:08.323899   54649 logs.go:284] 1 containers: [7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab]
	I0717 23:01:08.324251   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:08.330335   54649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 23:01:08.330405   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 23:01:08.380361   54649 cri.go:89] found id: "30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223"
	I0717 23:01:08.380387   54649 cri.go:89] found id: ""
	I0717 23:01:08.380399   54649 logs.go:284] 1 containers: [30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223]
	I0717 23:01:08.380458   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:08.384547   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 23:01:08.384612   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 23:01:08.416767   54649 cri.go:89] found id: "4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad"
	I0717 23:01:08.416787   54649 cri.go:89] found id: ""
	I0717 23:01:08.416793   54649 logs.go:284] 1 containers: [4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad]
	I0717 23:01:08.416836   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:08.420982   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 23:01:08.421031   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 23:01:08.451034   54649 cri.go:89] found id: "a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524"
	I0717 23:01:08.451064   54649 cri.go:89] found id: ""
	I0717 23:01:08.451074   54649 logs.go:284] 1 containers: [a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524]
	I0717 23:01:08.451126   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:08.455015   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 23:01:08.455063   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 23:01:08.486539   54649 cri.go:89] found id: "7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10"
	I0717 23:01:08.486560   54649 cri.go:89] found id: ""
	I0717 23:01:08.486567   54649 logs.go:284] 1 containers: [7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10]
	I0717 23:01:08.486620   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:08.491106   54649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 23:01:08.491171   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 23:01:08.523068   54649 cri.go:89] found id: ""
	I0717 23:01:08.523099   54649 logs.go:284] 0 containers: []
	W0717 23:01:08.523109   54649 logs.go:286] No container was found matching "kindnet"
	I0717 23:01:08.523116   54649 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 23:01:08.523201   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 23:01:08.556090   54649 cri.go:89] found id: "4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6"
	I0717 23:01:08.556116   54649 cri.go:89] found id: ""
	I0717 23:01:08.556125   54649 logs.go:284] 1 containers: [4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6]
	I0717 23:01:08.556181   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:08.560278   54649 logs.go:123] Gathering logs for storage-provisioner [4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6] ...
	I0717 23:01:08.560301   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6"
	I0717 23:01:08.595021   54649 logs.go:123] Gathering logs for container status ...
	I0717 23:01:08.595052   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 23:01:08.640723   54649 logs.go:123] Gathering logs for dmesg ...
	I0717 23:01:08.640757   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 23:01:08.654641   54649 logs.go:123] Gathering logs for describe nodes ...
	I0717 23:01:08.654679   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 23:01:08.789999   54649 logs.go:123] Gathering logs for kube-apiserver [45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0] ...
	I0717 23:01:08.790026   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0"
	I0717 23:01:08.837387   54649 logs.go:123] Gathering logs for coredns [30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223] ...
	I0717 23:01:08.837420   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223"
	I0717 23:01:08.871514   54649 logs.go:123] Gathering logs for kube-proxy [a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524] ...
	I0717 23:01:08.871565   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524"
	I0717 23:01:08.911626   54649 logs.go:123] Gathering logs for kube-controller-manager [7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10] ...
	I0717 23:01:08.911657   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10"
	I0717 23:01:08.961157   54649 logs.go:123] Gathering logs for kubelet ...
	I0717 23:01:08.961192   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 23:01:09.040804   54649 logs.go:138] Found kubelet problem: Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: W0717 22:56:50.841820    3828 reflector.go:533] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	W0717 23:01:09.040992   54649 logs.go:138] Found kubelet problem: Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: E0717 22:56:50.841863    3828 reflector.go:148] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	I0717 23:01:09.067178   54649 logs.go:123] Gathering logs for etcd [7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab] ...
	I0717 23:01:09.067213   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab"
	I0717 23:01:09.104138   54649 logs.go:123] Gathering logs for kube-scheduler [4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad] ...
	I0717 23:01:09.104170   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad"
	I0717 23:01:09.146623   54649 logs.go:123] Gathering logs for CRI-O ...
	I0717 23:01:09.146653   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 23:01:09.681092   54649 out.go:309] Setting ErrFile to fd 2...
	I0717 23:01:09.681128   54649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0717 23:01:09.681200   54649 out.go:239] X Problems detected in kubelet:
	W0717 23:01:09.681217   54649 out.go:239]   Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: W0717 22:56:50.841820    3828 reflector.go:533] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	W0717 23:01:09.681229   54649 out.go:239]   Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: E0717 22:56:50.841863    3828 reflector.go:148] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	I0717 23:01:09.681237   54649 out.go:309] Setting ErrFile to fd 2...
	I0717 23:01:09.681244   54649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 23:01:19.682682   54649 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8444/healthz ...
	I0717 23:01:19.688102   54649 api_server.go:279] https://192.168.72.118:8444/healthz returned 200:
	ok
	I0717 23:01:19.689304   54649 api_server.go:141] control plane version: v1.27.3
	I0717 23:01:19.689323   54649 api_server.go:131] duration metric: took 11.430226781s to wait for apiserver health ...
	I0717 23:01:19.689330   54649 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 23:01:19.689349   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 23:01:19.689393   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 23:01:19.731728   54649 cri.go:89] found id: "45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0"
	I0717 23:01:19.731748   54649 cri.go:89] found id: ""
	I0717 23:01:19.731756   54649 logs.go:284] 1 containers: [45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0]
	I0717 23:01:19.731807   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:19.737797   54649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 23:01:19.737857   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 23:01:19.776355   54649 cri.go:89] found id: "7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab"
	I0717 23:01:19.776377   54649 cri.go:89] found id: ""
	I0717 23:01:19.776385   54649 logs.go:284] 1 containers: [7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab]
	I0717 23:01:19.776438   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:19.780589   54649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 23:01:19.780645   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 23:01:19.810917   54649 cri.go:89] found id: "30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223"
	I0717 23:01:19.810938   54649 cri.go:89] found id: ""
	I0717 23:01:19.810947   54649 logs.go:284] 1 containers: [30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223]
	I0717 23:01:19.811001   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:19.815185   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 23:01:19.815252   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 23:01:19.852138   54649 cri.go:89] found id: "4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad"
	I0717 23:01:19.852161   54649 cri.go:89] found id: ""
	I0717 23:01:19.852170   54649 logs.go:284] 1 containers: [4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad]
	I0717 23:01:19.852225   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:19.856947   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 23:01:19.857012   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 23:01:19.893668   54649 cri.go:89] found id: "a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524"
	I0717 23:01:19.893695   54649 cri.go:89] found id: ""
	I0717 23:01:19.893705   54649 logs.go:284] 1 containers: [a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524]
	I0717 23:01:19.893763   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:19.897862   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 23:01:19.897915   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 23:01:19.935000   54649 cri.go:89] found id: "7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10"
	I0717 23:01:19.935024   54649 cri.go:89] found id: ""
	I0717 23:01:19.935033   54649 logs.go:284] 1 containers: [7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10]
	I0717 23:01:19.935097   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:19.939417   54649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 23:01:19.939487   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 23:01:19.971266   54649 cri.go:89] found id: ""
	I0717 23:01:19.971296   54649 logs.go:284] 0 containers: []
	W0717 23:01:19.971305   54649 logs.go:286] No container was found matching "kindnet"
	I0717 23:01:19.971313   54649 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 23:01:19.971374   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 23:01:20.007281   54649 cri.go:89] found id: "4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6"
	I0717 23:01:20.007299   54649 cri.go:89] found id: ""
	I0717 23:01:20.007306   54649 logs.go:284] 1 containers: [4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6]
	I0717 23:01:20.007351   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:20.011751   54649 logs.go:123] Gathering logs for describe nodes ...
	I0717 23:01:20.011776   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 23:01:20.146025   54649 logs.go:123] Gathering logs for kube-apiserver [45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0] ...
	I0717 23:01:20.146052   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0"
	I0717 23:01:20.197984   54649 logs.go:123] Gathering logs for etcd [7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab] ...
	I0717 23:01:20.198014   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab"
	I0717 23:01:20.240729   54649 logs.go:123] Gathering logs for coredns [30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223] ...
	I0717 23:01:20.240765   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223"
	I0717 23:01:20.280904   54649 logs.go:123] Gathering logs for kube-scheduler [4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad] ...
	I0717 23:01:20.280931   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad"
	I0717 23:01:20.338648   54649 logs.go:123] Gathering logs for storage-provisioner [4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6] ...
	I0717 23:01:20.338679   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6"
	I0717 23:01:20.378549   54649 logs.go:123] Gathering logs for CRI-O ...
	I0717 23:01:20.378586   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 23:01:20.858716   54649 logs.go:123] Gathering logs for kubelet ...
	I0717 23:01:20.858759   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 23:01:20.944347   54649 logs.go:138] Found kubelet problem: Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: W0717 22:56:50.841820    3828 reflector.go:533] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	W0717 23:01:20.944538   54649 logs.go:138] Found kubelet problem: Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: E0717 22:56:50.841863    3828 reflector.go:148] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	I0717 23:01:20.971487   54649 logs.go:123] Gathering logs for kube-proxy [a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524] ...
	I0717 23:01:20.971520   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524"
	I0717 23:01:21.007705   54649 logs.go:123] Gathering logs for kube-controller-manager [7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10] ...
	I0717 23:01:21.007736   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10"
	I0717 23:01:21.059674   54649 logs.go:123] Gathering logs for container status ...
	I0717 23:01:21.059703   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 23:01:21.095693   54649 logs.go:123] Gathering logs for dmesg ...
	I0717 23:01:21.095722   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 23:01:21.110247   54649 out.go:309] Setting ErrFile to fd 2...
	I0717 23:01:21.110273   54649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0717 23:01:21.110336   54649 out.go:239] X Problems detected in kubelet:
	W0717 23:01:21.110354   54649 out.go:239]   Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: W0717 22:56:50.841820    3828 reflector.go:533] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	W0717 23:01:21.110364   54649 out.go:239]   Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: E0717 22:56:50.841863    3828 reflector.go:148] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	I0717 23:01:21.110371   54649 out.go:309] Setting ErrFile to fd 2...
	I0717 23:01:21.110379   54649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 23:01:31.121237   54649 system_pods.go:59] 8 kube-system pods found
	I0717 23:01:31.121266   54649 system_pods.go:61] "coredns-5d78c9869d-rqcjj" [9f3bc4cf-fb20-413e-b367-27bcb997ab80] Running
	I0717 23:01:31.121272   54649 system_pods.go:61] "etcd-default-k8s-diff-port-504828" [1e432373-0f87-4cda-969e-492a8b534af0] Running
	I0717 23:01:31.121280   54649 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-504828" [573bd1d1-09ff-40b5-9746-0b3fa3d51f08] Running
	I0717 23:01:31.121290   54649 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-504828" [c6baeefc-57b7-4710-998c-0af932d2db14] Running
	I0717 23:01:31.121299   54649 system_pods.go:61] "kube-proxy-nmtc8" [1f8a0182-d1df-4609-86d1-7695a138e32f] Running
	I0717 23:01:31.121307   54649 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-504828" [df487feb-f937-4832-ad65-38718d4325c5] Running
	I0717 23:01:31.121317   54649 system_pods.go:61] "metrics-server-74d5c6b9c-j8f2f" [328c892b-7402-480b-bc29-a316c8fb7b1f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 23:01:31.121339   54649 system_pods.go:61] "storage-provisioner" [0b840b23-0b6f-4ed1-9ae5-d96311b3b9f1] Running
	I0717 23:01:31.121347   54649 system_pods.go:74] duration metric: took 11.432011006s to wait for pod list to return data ...
	I0717 23:01:31.121357   54649 default_sa.go:34] waiting for default service account to be created ...
	I0717 23:01:31.124377   54649 default_sa.go:45] found service account: "default"
	I0717 23:01:31.124403   54649 default_sa.go:55] duration metric: took 3.036772ms for default service account to be created ...
	I0717 23:01:31.124413   54649 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 23:01:31.131080   54649 system_pods.go:86] 8 kube-system pods found
	I0717 23:01:31.131116   54649 system_pods.go:89] "coredns-5d78c9869d-rqcjj" [9f3bc4cf-fb20-413e-b367-27bcb997ab80] Running
	I0717 23:01:31.131125   54649 system_pods.go:89] "etcd-default-k8s-diff-port-504828" [1e432373-0f87-4cda-969e-492a8b534af0] Running
	I0717 23:01:31.131132   54649 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-504828" [573bd1d1-09ff-40b5-9746-0b3fa3d51f08] Running
	I0717 23:01:31.131140   54649 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-504828" [c6baeefc-57b7-4710-998c-0af932d2db14] Running
	I0717 23:01:31.131151   54649 system_pods.go:89] "kube-proxy-nmtc8" [1f8a0182-d1df-4609-86d1-7695a138e32f] Running
	I0717 23:01:31.131158   54649 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-504828" [df487feb-f937-4832-ad65-38718d4325c5] Running
	I0717 23:01:31.131182   54649 system_pods.go:89] "metrics-server-74d5c6b9c-j8f2f" [328c892b-7402-480b-bc29-a316c8fb7b1f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 23:01:31.131190   54649 system_pods.go:89] "storage-provisioner" [0b840b23-0b6f-4ed1-9ae5-d96311b3b9f1] Running
	I0717 23:01:31.131204   54649 system_pods.go:126] duration metric: took 6.785139ms to wait for k8s-apps to be running ...
	I0717 23:01:31.131211   54649 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 23:01:31.131260   54649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 23:01:31.150458   54649 system_svc.go:56] duration metric: took 19.234064ms WaitForService to wait for kubelet.
	I0717 23:01:31.150495   54649 kubeadm.go:581] duration metric: took 4m39.911769992s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 23:01:31.150523   54649 node_conditions.go:102] verifying NodePressure condition ...
	I0717 23:01:31.153677   54649 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 23:01:31.153700   54649 node_conditions.go:123] node cpu capacity is 2
	I0717 23:01:31.153710   54649 node_conditions.go:105] duration metric: took 3.182344ms to run NodePressure ...
	I0717 23:01:31.153720   54649 start.go:228] waiting for startup goroutines ...
	I0717 23:01:31.153726   54649 start.go:233] waiting for cluster config update ...
	I0717 23:01:31.153737   54649 start.go:242] writing updated cluster config ...
	I0717 23:01:31.153995   54649 ssh_runner.go:195] Run: rm -f paused
	I0717 23:01:31.204028   54649 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 23:01:31.207280   54649 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-504828" cluster and "default" namespace by default
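The apiserver health wait recorded earlier in this log (23:01:19, "Checking apiserver healthz at https://192.168.72.118:8444/healthz ... returned 200: ok") boils down to polling the /healthz endpoint until it answers 200. Below is a minimal sketch of such a probe, not minikube's implementation: the URL is the one from the log, the function name waitForHealthz is illustrative, and skipping TLS verification is an assumption made only to keep the sketch self-contained (minikube itself trusts the cluster CA).

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the given healthz URL until it returns HTTP 200 or the
// timeout elapses, roughly the check the log records as api_server.go:253/279.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for this sketch only: accept the apiserver's cert without verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // corresponds to "returned 200: ok" in the log
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.118:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}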
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-07-17 22:50:43 UTC, ends at Mon 2023-07-17 23:05:04 UTC. --
	Jul 17 23:05:04 no-preload-935524 crio[717]: time="2023-07-17 23:05:04.610852443Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5d0c8593-229f-420e-a3f3-f9daf2766ada name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:05:04 no-preload-935524 crio[717]: time="2023-07-17 23:05:04.610930383Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5d0c8593-229f-420e-a3f3-f9daf2766ada name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:05:04 no-preload-935524 crio[717]: time="2023-07-17 23:05:04.611142267Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b,PodSandboxId:60a1553845355bd0feeaf99e11c3c2223a4336bb3b2a0eb1cb8c32e0984866fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1689634327958347539,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85812d54-7a57-430b-991e-e301f123a86a,},Annotations:map[string]string{io.kubernetes.container.hash: 57eef15c,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:261a700a8907986d0f966b98d34351dfd9336e43704a40c37776c1ed63450241,PodSandboxId:040b35ae9ad790ae7437267dedcfff686f68e7be033dd9162aca001980b6523d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689634305663330716,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dcf23863-eb23-4dfc-91c8-866a27d56aa7,},Annotations:map[string]string{io.kubernetes.container.hash: 973fd6c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266,PodSandboxId:f902332e9e90689e37f58ce26a95d7ab0f14618710c232deeaf87c7ea8406702,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1689634304354928918,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-2mpst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7516b57f-a4cb-4e2f-995e-8e063bed22ae,},Annotations:map[string]string{io.kubernetes.container.hash: c29e4e62,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567,PodSandboxId:51533f726d16a7a25c8f1df3b069554994196fc755a027e66f62ce82cc4ef14f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1689634297112719059,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qhp66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bc95955-b
7ba-41e3-ac67-604a9695f784,},Annotations:map[string]string{io.kubernetes.container.hash: 699cfe2f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6,PodSandboxId:60a1553845355bd0feeaf99e11c3c2223a4336bb3b2a0eb1cb8c32e0984866fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1689634296971941342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85812d54-7a5
7-430b-991e-e301f123a86a,},Annotations:map[string]string{io.kubernetes.container.hash: 57eef15c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea,PodSandboxId:4df7366612b3156863fa7782df6ce3f5f90b3ae71ac30ca4025309c9a266320b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1689634290450992126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92baac5ff4aef0bdc09a7e86a9f715db,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 27a9a45e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629,PodSandboxId:9772f73a659f4549b64414a8ab63f4c9f3e50cb794ca280b596e104220f6529c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1689634290189634573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fc722d6f7af09db92d907e47260519,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f,PodSandboxId:562bec26ceed6b076fa95d86bc8316044aca8e3aa08dccfd15f33744253a910c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1689634289922162015,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bae05c026731489afedf650b3c97278,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: aa189e2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c,PodSandboxId:fd17cc14d6355da6992dd10c10923917d48d3166cda4a978b71e1f5035bd7ce9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1689634289731127688,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2084677272e90c7a54057bf2dd1092d,},Annotations:map[strin
g]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5d0c8593-229f-420e-a3f3-f9daf2766ada name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:05:04 no-preload-935524 crio[717]: time="2023-07-17 23:05:04.650939420Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=97897256-a8a8-4be1-90f9-7afad165678a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:05:04 no-preload-935524 crio[717]: time="2023-07-17 23:05:04.651029418Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=97897256-a8a8-4be1-90f9-7afad165678a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:05:04 no-preload-935524 crio[717]: time="2023-07-17 23:05:04.651228432Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b,PodSandboxId:60a1553845355bd0feeaf99e11c3c2223a4336bb3b2a0eb1cb8c32e0984866fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1689634327958347539,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85812d54-7a57-430b-991e-e301f123a86a,},Annotations:map[string]string{io.kubernetes.container.hash: 57eef15c,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:261a700a8907986d0f966b98d34351dfd9336e43704a40c37776c1ed63450241,PodSandboxId:040b35ae9ad790ae7437267dedcfff686f68e7be033dd9162aca001980b6523d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689634305663330716,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dcf23863-eb23-4dfc-91c8-866a27d56aa7,},Annotations:map[string]string{io.kubernetes.container.hash: 973fd6c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266,PodSandboxId:f902332e9e90689e37f58ce26a95d7ab0f14618710c232deeaf87c7ea8406702,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1689634304354928918,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-2mpst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7516b57f-a4cb-4e2f-995e-8e063bed22ae,},Annotations:map[string]string{io.kubernetes.container.hash: c29e4e62,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567,PodSandboxId:51533f726d16a7a25c8f1df3b069554994196fc755a027e66f62ce82cc4ef14f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1689634297112719059,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qhp66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bc95955-b
7ba-41e3-ac67-604a9695f784,},Annotations:map[string]string{io.kubernetes.container.hash: 699cfe2f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6,PodSandboxId:60a1553845355bd0feeaf99e11c3c2223a4336bb3b2a0eb1cb8c32e0984866fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1689634296971941342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85812d54-7a5
7-430b-991e-e301f123a86a,},Annotations:map[string]string{io.kubernetes.container.hash: 57eef15c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea,PodSandboxId:4df7366612b3156863fa7782df6ce3f5f90b3ae71ac30ca4025309c9a266320b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1689634290450992126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92baac5ff4aef0bdc09a7e86a9f715db,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 27a9a45e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629,PodSandboxId:9772f73a659f4549b64414a8ab63f4c9f3e50cb794ca280b596e104220f6529c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1689634290189634573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fc722d6f7af09db92d907e47260519,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f,PodSandboxId:562bec26ceed6b076fa95d86bc8316044aca8e3aa08dccfd15f33744253a910c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1689634289922162015,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bae05c026731489afedf650b3c97278,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: aa189e2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c,PodSandboxId:fd17cc14d6355da6992dd10c10923917d48d3166cda4a978b71e1f5035bd7ce9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1689634289731127688,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2084677272e90c7a54057bf2dd1092d,},Annotations:map[strin
g]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=97897256-a8a8-4be1-90f9-7afad165678a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:05:04 no-preload-935524 crio[717]: time="2023-07-17 23:05:04.688850076Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1c802386-c251-4eae-b21a-2e7b98ab072e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:05:04 no-preload-935524 crio[717]: time="2023-07-17 23:05:04.688940042Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1c802386-c251-4eae-b21a-2e7b98ab072e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:05:04 no-preload-935524 crio[717]: time="2023-07-17 23:05:04.689149327Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b,PodSandboxId:60a1553845355bd0feeaf99e11c3c2223a4336bb3b2a0eb1cb8c32e0984866fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1689634327958347539,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85812d54-7a57-430b-991e-e301f123a86a,},Annotations:map[string]string{io.kubernetes.container.hash: 57eef15c,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:261a700a8907986d0f966b98d34351dfd9336e43704a40c37776c1ed63450241,PodSandboxId:040b35ae9ad790ae7437267dedcfff686f68e7be033dd9162aca001980b6523d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689634305663330716,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dcf23863-eb23-4dfc-91c8-866a27d56aa7,},Annotations:map[string]string{io.kubernetes.container.hash: 973fd6c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266,PodSandboxId:f902332e9e90689e37f58ce26a95d7ab0f14618710c232deeaf87c7ea8406702,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1689634304354928918,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-2mpst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7516b57f-a4cb-4e2f-995e-8e063bed22ae,},Annotations:map[string]string{io.kubernetes.container.hash: c29e4e62,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567,PodSandboxId:51533f726d16a7a25c8f1df3b069554994196fc755a027e66f62ce82cc4ef14f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1689634297112719059,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qhp66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bc95955-b
7ba-41e3-ac67-604a9695f784,},Annotations:map[string]string{io.kubernetes.container.hash: 699cfe2f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6,PodSandboxId:60a1553845355bd0feeaf99e11c3c2223a4336bb3b2a0eb1cb8c32e0984866fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1689634296971941342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85812d54-7a5
7-430b-991e-e301f123a86a,},Annotations:map[string]string{io.kubernetes.container.hash: 57eef15c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea,PodSandboxId:4df7366612b3156863fa7782df6ce3f5f90b3ae71ac30ca4025309c9a266320b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1689634290450992126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92baac5ff4aef0bdc09a7e86a9f715db,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 27a9a45e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629,PodSandboxId:9772f73a659f4549b64414a8ab63f4c9f3e50cb794ca280b596e104220f6529c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1689634290189634573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fc722d6f7af09db92d907e47260519,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f,PodSandboxId:562bec26ceed6b076fa95d86bc8316044aca8e3aa08dccfd15f33744253a910c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1689634289922162015,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bae05c026731489afedf650b3c97278,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: aa189e2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c,PodSandboxId:fd17cc14d6355da6992dd10c10923917d48d3166cda4a978b71e1f5035bd7ce9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1689634289731127688,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2084677272e90c7a54057bf2dd1092d,},Annotations:map[strin
g]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1c802386-c251-4eae-b21a-2e7b98ab072e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:05:04 no-preload-935524 crio[717]: time="2023-07-17 23:05:04.713158058Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=626ed1a4-c45d-404f-aa72-60c103573834 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 23:05:04 no-preload-935524 crio[717]: time="2023-07-17 23:05:04.713409976Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:f902332e9e90689e37f58ce26a95d7ab0f14618710c232deeaf87c7ea8406702,Metadata:&PodSandboxMetadata{Name:coredns-5d78c9869d-2mpst,Uid:7516b57f-a4cb-4e2f-995e-8e063bed22ae,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634303654424753,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5d78c9869d-2mpst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7516b57f-a4cb-4e2f-995e-8e063bed22ae,k8s-app: kube-dns,pod-template-hash: 5d78c9869d,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T22:51:35.651731739Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:040b35ae9ad790ae7437267dedcfff686f68e7be033dd9162aca001980b6523d,Metadata:&PodSandboxMetadata{Name:busybox,Uid:dcf23863-eb23-4dfc-91c8-866a27d56aa7,Namespace:default,Attempt:0,},Stat
e:SANDBOX_READY,CreatedAt:1689634303644937736,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dcf23863-eb23-4dfc-91c8-866a27d56aa7,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T22:51:35.651716363Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1f5ca97d916e4d004b7c51e61f4548011250a8cb58c8de08eb189e2e3e508fc4,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5c6b9c-tlbpl,Uid:7c478efe-4435-45dd-a688-745872fc2918,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634300917336979,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5c6b9c-tlbpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c478efe-4435-45dd-a688-745872fc2918,k8s-app: metrics-server,pod-template-hash: 74d5c6b9c,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T22:51:35.6517
27635Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:60a1553845355bd0feeaf99e11c3c2223a4336bb3b2a0eb1cb8c32e0984866fc,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:85812d54-7a57-430b-991e-e301f123a86a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634296019814543,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85812d54-7a57-430b-991e-e301f123a86a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-mini
kube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-07-17T22:51:35.651729418Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:51533f726d16a7a25c8f1df3b069554994196fc755a027e66f62ce82cc4ef14f,Metadata:&PodSandboxMetadata{Name:kube-proxy-qhp66,Uid:8bc95955-b7ba-41e3-ac67-604a9695f784,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634296016001704,Labels:map[string]string{controller-revision-hash: 56999f657b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-qhp66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bc95955-b7ba-41e3-ac67-604a9695f784,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/co
nfig.seen: 2023-07-17T22:51:35.651725890Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9772f73a659f4549b64414a8ab63f4c9f3e50cb794ca280b596e104220f6529c,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-935524,Uid:f2fc722d6f7af09db92d907e47260519,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634289211695893,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fc722d6f7af09db92d907e47260519,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f2fc722d6f7af09db92d907e47260519,kubernetes.io/config.seen: 2023-07-17T22:51:28.643973980Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:562bec26ceed6b076fa95d86bc8316044aca8e3aa08dccfd15f33744253a910c,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-935524,Uid:3bae05c026731489afedf650b3c97278,Namespace:kube
-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634289197674112,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bae05c026731489afedf650b3c97278,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.6:8443,kubernetes.io/config.hash: 3bae05c026731489afedf650b3c97278,kubernetes.io/config.seen: 2023-07-17T22:51:28.643971934Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4df7366612b3156863fa7782df6ce3f5f90b3ae71ac30ca4025309c9a266320b,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-935524,Uid:92baac5ff4aef0bdc09a7e86a9f715db,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634289188260097,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-935524,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 92baac5ff4aef0bdc09a7e86a9f715db,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.6:2379,kubernetes.io/config.hash: 92baac5ff4aef0bdc09a7e86a9f715db,kubernetes.io/config.seen: 2023-07-17T22:51:28.643967432Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fd17cc14d6355da6992dd10c10923917d48d3166cda4a978b71e1f5035bd7ce9,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-935524,Uid:b2084677272e90c7a54057bf2dd1092d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634289181834444,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2084677272e90c7a54057bf2dd1092d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b2084677272e90c7a54057bf2dd1092d,kubernete
s.io/config.seen: 2023-07-17T22:51:28.643973099Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=626ed1a4-c45d-404f-aa72-60c103573834 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 23:05:04 no-preload-935524 crio[717]: time="2023-07-17 23:05:04.715074906Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=54697c47-c9cb-4c4f-8cd8-7faf560d3459 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 23:05:04 no-preload-935524 crio[717]: time="2023-07-17 23:05:04.715141059Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=54697c47-c9cb-4c4f-8cd8-7faf560d3459 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 23:05:04 no-preload-935524 crio[717]: time="2023-07-17 23:05:04.715308401Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b,PodSandboxId:60a1553845355bd0feeaf99e11c3c2223a4336bb3b2a0eb1cb8c32e0984866fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1689634327958347539,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85812d54-7a57-430b-991e-e301f123a86a,},Annotations:map[string]string{io.kubernetes.container.hash: 57eef15c,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:261a700a8907986d0f966b98d34351dfd9336e43704a40c37776c1ed63450241,PodSandboxId:040b35ae9ad790ae7437267dedcfff686f68e7be033dd9162aca001980b6523d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689634305663330716,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dcf23863-eb23-4dfc-91c8-866a27d56aa7,},Annotations:map[string]string{io.kubernetes.container.hash: 973fd6c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266,PodSandboxId:f902332e9e90689e37f58ce26a95d7ab0f14618710c232deeaf87c7ea8406702,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1689634304354928918,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-2mpst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7516b57f-a4cb-4e2f-995e-8e063bed22ae,},Annotations:map[string]string{io.kubernetes.container.hash: c29e4e62,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567,PodSandboxId:51533f726d16a7a25c8f1df3b069554994196fc755a027e66f62ce82cc4ef14f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1689634297112719059,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qhp66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bc95955-b
7ba-41e3-ac67-604a9695f784,},Annotations:map[string]string{io.kubernetes.container.hash: 699cfe2f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea,PodSandboxId:4df7366612b3156863fa7782df6ce3f5f90b3ae71ac30ca4025309c9a266320b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1689634290450992126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92baac5ff4aef0bdc09a7e86a9f715db,},Annotations:map[string
]string{io.kubernetes.container.hash: 27a9a45e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629,PodSandboxId:9772f73a659f4549b64414a8ab63f4c9f3e50cb794ca280b596e104220f6529c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1689634290189634573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fc722d6f7af09db92d907e47260519,},Annotations:map[string]string{io.
kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f,PodSandboxId:562bec26ceed6b076fa95d86bc8316044aca8e3aa08dccfd15f33744253a910c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1689634289922162015,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bae05c026731489afedf650b3c97278,},Annotations:map[string]string{io.kubernetes.
container.hash: aa189e2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c,PodSandboxId:fd17cc14d6355da6992dd10c10923917d48d3166cda4a978b71e1f5035bd7ce9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1689634289731127688,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2084677272e90c7a54057bf2dd1092d,},Annotations:map[str
ing]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=54697c47-c9cb-4c4f-8cd8-7faf560d3459 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 23:05:04 no-preload-935524 crio[717]: time="2023-07-17 23:05:04.735153982Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6ed2abab-da3f-4411-b2c4-057ad52f80d4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:05:04 no-preload-935524 crio[717]: time="2023-07-17 23:05:04.735219764Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6ed2abab-da3f-4411-b2c4-057ad52f80d4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:05:04 no-preload-935524 crio[717]: time="2023-07-17 23:05:04.735411517Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b,PodSandboxId:60a1553845355bd0feeaf99e11c3c2223a4336bb3b2a0eb1cb8c32e0984866fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1689634327958347539,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85812d54-7a57-430b-991e-e301f123a86a,},Annotations:map[string]string{io.kubernetes.container.hash: 57eef15c,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:261a700a8907986d0f966b98d34351dfd9336e43704a40c37776c1ed63450241,PodSandboxId:040b35ae9ad790ae7437267dedcfff686f68e7be033dd9162aca001980b6523d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689634305663330716,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dcf23863-eb23-4dfc-91c8-866a27d56aa7,},Annotations:map[string]string{io.kubernetes.container.hash: 973fd6c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266,PodSandboxId:f902332e9e90689e37f58ce26a95d7ab0f14618710c232deeaf87c7ea8406702,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1689634304354928918,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-2mpst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7516b57f-a4cb-4e2f-995e-8e063bed22ae,},Annotations:map[string]string{io.kubernetes.container.hash: c29e4e62,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567,PodSandboxId:51533f726d16a7a25c8f1df3b069554994196fc755a027e66f62ce82cc4ef14f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1689634297112719059,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qhp66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bc95955-b
7ba-41e3-ac67-604a9695f784,},Annotations:map[string]string{io.kubernetes.container.hash: 699cfe2f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6,PodSandboxId:60a1553845355bd0feeaf99e11c3c2223a4336bb3b2a0eb1cb8c32e0984866fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1689634296971941342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85812d54-7a5
7-430b-991e-e301f123a86a,},Annotations:map[string]string{io.kubernetes.container.hash: 57eef15c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea,PodSandboxId:4df7366612b3156863fa7782df6ce3f5f90b3ae71ac30ca4025309c9a266320b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1689634290450992126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92baac5ff4aef0bdc09a7e86a9f715db,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 27a9a45e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629,PodSandboxId:9772f73a659f4549b64414a8ab63f4c9f3e50cb794ca280b596e104220f6529c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1689634290189634573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fc722d6f7af09db92d907e47260519,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f,PodSandboxId:562bec26ceed6b076fa95d86bc8316044aca8e3aa08dccfd15f33744253a910c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1689634289922162015,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bae05c026731489afedf650b3c97278,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: aa189e2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c,PodSandboxId:fd17cc14d6355da6992dd10c10923917d48d3166cda4a978b71e1f5035bd7ce9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1689634289731127688,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2084677272e90c7a54057bf2dd1092d,},Annotations:map[strin
g]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6ed2abab-da3f-4411-b2c4-057ad52f80d4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:05:04 no-preload-935524 crio[717]: time="2023-07-17 23:05:04.772077085Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f585a6ae-8aca-41ba-b1d3-1bc0917a347a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:05:04 no-preload-935524 crio[717]: time="2023-07-17 23:05:04.772173350Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f585a6ae-8aca-41ba-b1d3-1bc0917a347a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:05:04 no-preload-935524 crio[717]: time="2023-07-17 23:05:04.772613198Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b,PodSandboxId:60a1553845355bd0feeaf99e11c3c2223a4336bb3b2a0eb1cb8c32e0984866fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1689634327958347539,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85812d54-7a57-430b-991e-e301f123a86a,},Annotations:map[string]string{io.kubernetes.container.hash: 57eef15c,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:261a700a8907986d0f966b98d34351dfd9336e43704a40c37776c1ed63450241,PodSandboxId:040b35ae9ad790ae7437267dedcfff686f68e7be033dd9162aca001980b6523d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689634305663330716,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dcf23863-eb23-4dfc-91c8-866a27d56aa7,},Annotations:map[string]string{io.kubernetes.container.hash: 973fd6c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266,PodSandboxId:f902332e9e90689e37f58ce26a95d7ab0f14618710c232deeaf87c7ea8406702,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1689634304354928918,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-2mpst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7516b57f-a4cb-4e2f-995e-8e063bed22ae,},Annotations:map[string]string{io.kubernetes.container.hash: c29e4e62,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567,PodSandboxId:51533f726d16a7a25c8f1df3b069554994196fc755a027e66f62ce82cc4ef14f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1689634297112719059,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qhp66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bc95955-b
7ba-41e3-ac67-604a9695f784,},Annotations:map[string]string{io.kubernetes.container.hash: 699cfe2f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6,PodSandboxId:60a1553845355bd0feeaf99e11c3c2223a4336bb3b2a0eb1cb8c32e0984866fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1689634296971941342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85812d54-7a5
7-430b-991e-e301f123a86a,},Annotations:map[string]string{io.kubernetes.container.hash: 57eef15c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea,PodSandboxId:4df7366612b3156863fa7782df6ce3f5f90b3ae71ac30ca4025309c9a266320b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1689634290450992126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92baac5ff4aef0bdc09a7e86a9f715db,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 27a9a45e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629,PodSandboxId:9772f73a659f4549b64414a8ab63f4c9f3e50cb794ca280b596e104220f6529c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1689634290189634573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fc722d6f7af09db92d907e47260519,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f,PodSandboxId:562bec26ceed6b076fa95d86bc8316044aca8e3aa08dccfd15f33744253a910c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1689634289922162015,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bae05c026731489afedf650b3c97278,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: aa189e2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c,PodSandboxId:fd17cc14d6355da6992dd10c10923917d48d3166cda4a978b71e1f5035bd7ce9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1689634289731127688,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2084677272e90c7a54057bf2dd1092d,},Annotations:map[strin
g]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f585a6ae-8aca-41ba-b1d3-1bc0917a347a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:05:04 no-preload-935524 crio[717]: time="2023-07-17 23:05:04.792011254Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=6b30356e-5313-4531-b3ab-bd553909ba37 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 23:05:04 no-preload-935524 crio[717]: time="2023-07-17 23:05:04.792326482Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:f902332e9e90689e37f58ce26a95d7ab0f14618710c232deeaf87c7ea8406702,Metadata:&PodSandboxMetadata{Name:coredns-5d78c9869d-2mpst,Uid:7516b57f-a4cb-4e2f-995e-8e063bed22ae,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634303654424753,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5d78c9869d-2mpst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7516b57f-a4cb-4e2f-995e-8e063bed22ae,k8s-app: kube-dns,pod-template-hash: 5d78c9869d,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T22:51:35.651731739Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:040b35ae9ad790ae7437267dedcfff686f68e7be033dd9162aca001980b6523d,Metadata:&PodSandboxMetadata{Name:busybox,Uid:dcf23863-eb23-4dfc-91c8-866a27d56aa7,Namespace:default,Attempt:0,},Stat
e:SANDBOX_READY,CreatedAt:1689634303644937736,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dcf23863-eb23-4dfc-91c8-866a27d56aa7,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T22:51:35.651716363Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1f5ca97d916e4d004b7c51e61f4548011250a8cb58c8de08eb189e2e3e508fc4,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5c6b9c-tlbpl,Uid:7c478efe-4435-45dd-a688-745872fc2918,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634300917336979,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5c6b9c-tlbpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c478efe-4435-45dd-a688-745872fc2918,k8s-app: metrics-server,pod-template-hash: 74d5c6b9c,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T22:51:35.6517
27635Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:60a1553845355bd0feeaf99e11c3c2223a4336bb3b2a0eb1cb8c32e0984866fc,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:85812d54-7a57-430b-991e-e301f123a86a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634296019814543,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85812d54-7a57-430b-991e-e301f123a86a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-mini
kube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-07-17T22:51:35.651729418Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:51533f726d16a7a25c8f1df3b069554994196fc755a027e66f62ce82cc4ef14f,Metadata:&PodSandboxMetadata{Name:kube-proxy-qhp66,Uid:8bc95955-b7ba-41e3-ac67-604a9695f784,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634296016001704,Labels:map[string]string{controller-revision-hash: 56999f657b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-qhp66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bc95955-b7ba-41e3-ac67-604a9695f784,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/co
nfig.seen: 2023-07-17T22:51:35.651725890Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9772f73a659f4549b64414a8ab63f4c9f3e50cb794ca280b596e104220f6529c,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-935524,Uid:f2fc722d6f7af09db92d907e47260519,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634289211695893,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fc722d6f7af09db92d907e47260519,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f2fc722d6f7af09db92d907e47260519,kubernetes.io/config.seen: 2023-07-17T22:51:28.643973980Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:562bec26ceed6b076fa95d86bc8316044aca8e3aa08dccfd15f33744253a910c,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-935524,Uid:3bae05c026731489afedf650b3c97278,Namespace:kube
-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634289197674112,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bae05c026731489afedf650b3c97278,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.6:8443,kubernetes.io/config.hash: 3bae05c026731489afedf650b3c97278,kubernetes.io/config.seen: 2023-07-17T22:51:28.643971934Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4df7366612b3156863fa7782df6ce3f5f90b3ae71ac30ca4025309c9a266320b,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-935524,Uid:92baac5ff4aef0bdc09a7e86a9f715db,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634289188260097,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-935524,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 92baac5ff4aef0bdc09a7e86a9f715db,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.6:2379,kubernetes.io/config.hash: 92baac5ff4aef0bdc09a7e86a9f715db,kubernetes.io/config.seen: 2023-07-17T22:51:28.643967432Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fd17cc14d6355da6992dd10c10923917d48d3166cda4a978b71e1f5035bd7ce9,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-935524,Uid:b2084677272e90c7a54057bf2dd1092d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634289181834444,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2084677272e90c7a54057bf2dd1092d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b2084677272e90c7a54057bf2dd1092d,kubernete
s.io/config.seen: 2023-07-17T22:51:28.643973099Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=6b30356e-5313-4531-b3ab-bd553909ba37 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 23:05:04 no-preload-935524 crio[717]: time="2023-07-17 23:05:04.793835306Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c9f2d713-4129-43cc-a163-27d67d232676 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 23:05:04 no-preload-935524 crio[717]: time="2023-07-17 23:05:04.793943196Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c9f2d713-4129-43cc-a163-27d67d232676 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 23:05:04 no-preload-935524 crio[717]: time="2023-07-17 23:05:04.794266201Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b,PodSandboxId:60a1553845355bd0feeaf99e11c3c2223a4336bb3b2a0eb1cb8c32e0984866fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1689634327958347539,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85812d54-7a57-430b-991e-e301f123a86a,},Annotations:map[string]string{io.kubernetes.container.hash: 57eef15c,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:261a700a8907986d0f966b98d34351dfd9336e43704a40c37776c1ed63450241,PodSandboxId:040b35ae9ad790ae7437267dedcfff686f68e7be033dd9162aca001980b6523d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689634305663330716,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dcf23863-eb23-4dfc-91c8-866a27d56aa7,},Annotations:map[string]string{io.kubernetes.container.hash: 973fd6c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266,PodSandboxId:f902332e9e90689e37f58ce26a95d7ab0f14618710c232deeaf87c7ea8406702,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1689634304354928918,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-2mpst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7516b57f-a4cb-4e2f-995e-8e063bed22ae,},Annotations:map[string]string{io.kubernetes.container.hash: c29e4e62,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567,PodSandboxId:51533f726d16a7a25c8f1df3b069554994196fc755a027e66f62ce82cc4ef14f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1689634297112719059,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qhp66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bc95955-b
7ba-41e3-ac67-604a9695f784,},Annotations:map[string]string{io.kubernetes.container.hash: 699cfe2f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6,PodSandboxId:60a1553845355bd0feeaf99e11c3c2223a4336bb3b2a0eb1cb8c32e0984866fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1689634296971941342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85812d54-7a5
7-430b-991e-e301f123a86a,},Annotations:map[string]string{io.kubernetes.container.hash: 57eef15c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea,PodSandboxId:4df7366612b3156863fa7782df6ce3f5f90b3ae71ac30ca4025309c9a266320b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1689634290450992126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92baac5ff4aef0bdc09a7e86a9f715db,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 27a9a45e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629,PodSandboxId:9772f73a659f4549b64414a8ab63f4c9f3e50cb794ca280b596e104220f6529c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1689634290189634573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fc722d6f7af09db92d907e47260519,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f,PodSandboxId:562bec26ceed6b076fa95d86bc8316044aca8e3aa08dccfd15f33744253a910c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1689634289922162015,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bae05c026731489afedf650b3c97278,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: aa189e2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c,PodSandboxId:fd17cc14d6355da6992dd10c10923917d48d3166cda4a978b71e1f5035bd7ce9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1689634289731127688,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2084677272e90c7a54057bf2dd1092d,},Annotations:map[strin
g]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c9f2d713-4129-43cc-a163-27d67d232676 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	a67aa752ac1c9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       3                   60a1553845355
	261a700a89079       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   040b35ae9ad79
	acfd42b72df4e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   f902332e9e906
	9d9c7f49bf240       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c                                      13 minutes ago      Running             kube-proxy                1                   51533f726d16a
	4d1cbdc04001f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   60a1553845355
	98d6ff57de0a6       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681                                      13 minutes ago      Running             etcd                      1                   4df7366612b31
	692978c127c58       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a                                      13 minutes ago      Running             kube-scheduler            1                   9772f73a659f4
	c809651d0696d       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a                                      13 minutes ago      Running             kube-apiserver            1                   562bec26ceed6
	f0b0c765bf6d1       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f                                      13 minutes ago      Running             kube-controller-manager   1                   fd17cc14d6355
	
	* 
	* ==> coredns [acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:43682 - 9760 "HINFO IN 8743738622397940181.1830343981996442493. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.007463283s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-935524
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-935524
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8
	                    minikube.k8s.io/name=no-preload-935524
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T22_43_54_0700
	                    minikube.k8s.io/version=v1.31.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 22:43:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-935524
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 23:05:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 23:02:18 +0000   Mon, 17 Jul 2023 22:43:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 23:02:18 +0000   Mon, 17 Jul 2023 22:43:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 23:02:18 +0000   Mon, 17 Jul 2023 22:43:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 23:02:18 +0000   Mon, 17 Jul 2023 22:51:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.6
	  Hostname:    no-preload-935524
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 5e3c6fd294d54e4a8c1cf33a06e3109f
	  System UUID:                5e3c6fd2-94d5-4e4a-8c1c-f33a06e3109f
	  Boot ID:                    4c435d91-69b7-4bb5-af25-116bb7b7e15d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-5d78c9869d-2mpst                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-no-preload-935524                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-no-preload-935524             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-no-preload-935524    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-qhp66                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-no-preload-935524             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-74d5c6b9c-tlbpl               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node no-preload-935524 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node no-preload-935524 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node no-preload-935524 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node no-preload-935524 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node no-preload-935524 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node no-preload-935524 status is now: NodeHasSufficientPID
	  Normal  NodeReady                21m                kubelet          Node no-preload-935524 status is now: NodeReady
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           20m                node-controller  Node no-preload-935524 event: Registered Node no-preload-935524 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-935524 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-935524 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-935524 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-935524 event: Registered Node no-preload-935524 in Controller
	
	* 
	* ==> dmesg <==
	* [Jul17 22:50] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.081235] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.519377] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.565724] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.156384] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.586780] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.796455] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.133854] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.146421] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.106068] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.256775] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[Jul17 22:51] systemd-fstab-generator[1236]: Ignoring "noauto" for root device
	[ +15.358306] kauditd_printk_skb: 19 callbacks suppressed
	
	* 
	* ==> etcd [98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea] <==
	* {"level":"info","ts":"2023-07-17T22:51:33.199Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 received MsgVoteResp from 6f26d2d338759d80 at term 3"}
	{"level":"info","ts":"2023-07-17T22:51:33.199Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 became leader at term 3"}
	{"level":"info","ts":"2023-07-17T22:51:33.199Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6f26d2d338759d80 elected leader 6f26d2d338759d80 at term 3"}
	{"level":"info","ts":"2023-07-17T22:51:33.200Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"6f26d2d338759d80","local-member-attributes":"{Name:no-preload-935524 ClientURLs:[https://192.168.39.6:2379]}","request-path":"/0/members/6f26d2d338759d80/attributes","cluster-id":"1a1020f766a5ac01","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-17T22:51:33.200Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T22:51:33.202Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-17T22:51:33.202Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T22:51:33.201Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-17T22:51:33.202Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-17T22:51:33.204Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.39.6:2379"}
	{"level":"warn","ts":"2023-07-17T22:51:39.592Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.172738ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/busybox.1772c970a605f3ee\" ","response":"range_response_count:1 size:685"}
	{"level":"info","ts":"2023-07-17T22:51:39.593Z","caller":"traceutil/trace.go:171","msg":"trace[1633219326] range","detail":"{range_begin:/registry/events/default/busybox.1772c970a605f3ee; range_end:; response_count:1; response_revision:580; }","duration":"121.377847ms","start":"2023-07-17T22:51:39.471Z","end":"2023-07-17T22:51:39.593Z","steps":["trace[1633219326] 'range keys from in-memory index tree'  (duration: 121.034737ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T22:51:39.736Z","caller":"traceutil/trace.go:171","msg":"trace[1705859200] transaction","detail":"{read_only:false; response_revision:581; number_of_response:1; }","duration":"140.678405ms","start":"2023-07-17T22:51:39.595Z","end":"2023-07-17T22:51:39.736Z","steps":["trace[1705859200] 'process raft request'  (duration: 139.656296ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T22:51:40.379Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"231.801888ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-935524\" ","response":"range_response_count:1 size:4587"}
	{"level":"info","ts":"2023-07-17T22:51:40.380Z","caller":"traceutil/trace.go:171","msg":"trace[772327455] range","detail":"{range_begin:/registry/minions/no-preload-935524; range_end:; response_count:1; response_revision:581; }","duration":"231.929583ms","start":"2023-07-17T22:51:40.148Z","end":"2023-07-17T22:51:40.379Z","steps":["trace[772327455] 'range keys from in-memory index tree'  (duration: 231.643154ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T22:51:40.379Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.621153ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" ","response":"range_response_count:1 size:992"}
	{"level":"info","ts":"2023-07-17T22:51:40.380Z","caller":"traceutil/trace.go:171","msg":"trace[1286174485] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:1; response_revision:581; }","duration":"133.960507ms","start":"2023-07-17T22:51:40.246Z","end":"2023-07-17T22:51:40.380Z","steps":["trace[1286174485] 'range keys from in-memory index tree'  (duration: 133.497172ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T22:51:41.777Z","caller":"traceutil/trace.go:171","msg":"trace[1716132835] linearizableReadLoop","detail":"{readStateIndex:622; appliedIndex:621; }","duration":"129.164624ms","start":"2023-07-17T22:51:41.648Z","end":"2023-07-17T22:51:41.777Z","steps":["trace[1716132835] 'read index received'  (duration: 129.05012ms)","trace[1716132835] 'applied index is now lower than readState.Index'  (duration: 114.079µs)"],"step_count":2}
	{"level":"info","ts":"2023-07-17T22:51:41.777Z","caller":"traceutil/trace.go:171","msg":"trace[2145821345] transaction","detail":"{read_only:false; response_revision:582; number_of_response:1; }","duration":"192.222451ms","start":"2023-07-17T22:51:41.585Z","end":"2023-07-17T22:51:41.777Z","steps":["trace[2145821345] 'process raft request'  (duration: 191.93746ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T22:51:41.777Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.550576ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-935524\" ","response":"range_response_count:1 size:4587"}
	{"level":"info","ts":"2023-07-17T22:51:41.777Z","caller":"traceutil/trace.go:171","msg":"trace[805586029] range","detail":"{range_begin:/registry/minions/no-preload-935524; range_end:; response_count:1; response_revision:582; }","duration":"129.669793ms","start":"2023-07-17T22:51:41.648Z","end":"2023-07-17T22:51:41.777Z","steps":["trace[805586029] 'agreement among raft nodes before linearized reading'  (duration: 129.407445ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T22:51:41.909Z","caller":"traceutil/trace.go:171","msg":"trace[1806824517] transaction","detail":"{read_only:false; response_revision:583; number_of_response:1; }","duration":"127.6955ms","start":"2023-07-17T22:51:41.781Z","end":"2023-07-17T22:51:41.909Z","steps":["trace[1806824517] 'process raft request'  (duration: 61.71871ms)","trace[1806824517] 'compare'  (duration: 65.659395ms)"],"step_count":2}
	{"level":"info","ts":"2023-07-17T23:01:33.236Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":859}
	{"level":"info","ts":"2023-07-17T23:01:33.243Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":859,"took":"5.870316ms","hash":3718668186}
	{"level":"info","ts":"2023-07-17T23:01:33.243Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3718668186,"revision":859,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  23:05:05 up 14 min,  0 users,  load average: 0.22, 0.15, 0.12
	Linux no-preload-935524 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f] <==
	* , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 23:01:36.385817       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 23:01:36.385620       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 23:01:36.385974       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 23:01:36.387273       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 23:02:35.194887       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.96.173.99:443: connect: connection refused
	I0717 23:02:35.194982       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0717 23:02:36.386443       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 23:02:36.386674       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 23:02:36.386713       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 23:02:36.387620       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 23:02:36.387687       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 23:02:36.387838       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 23:03:35.194743       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.96.173.99:443: connect: connection refused
	I0717 23:03:35.194992       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0717 23:04:35.193993       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.96.173.99:443: connect: connection refused
	I0717 23:04:35.194140       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0717 23:04:36.386801       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 23:04:36.386913       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 23:04:36.386932       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 23:04:36.388006       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 23:04:36.388100       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 23:04:36.388129       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c] <==
	* W0717 22:58:49.125427       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 22:59:18.758015       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 22:59:19.136770       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 22:59:48.764445       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 22:59:49.154033       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:00:18.770341       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:00:19.162837       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:00:48.776565       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:00:49.171911       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:01:18.783204       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:01:19.183242       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:01:48.790365       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:01:49.192148       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:02:18.796125       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:02:19.203030       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:02:48.801037       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:02:49.211282       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:03:18.808049       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:03:19.220636       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:03:48.814077       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:03:49.228795       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:04:18.819670       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:04:19.237644       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:04:48.826619       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:04:49.246231       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	
	* 
	* ==> kube-proxy [9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567] <==
	* I0717 22:51:37.690801       1 node.go:141] Successfully retrieved node IP: 192.168.39.6
	I0717 22:51:37.691314       1 server_others.go:110] "Detected node IP" address="192.168.39.6"
	I0717 22:51:37.691656       1 server_others.go:554] "Using iptables proxy"
	I0717 22:51:37.828981       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0717 22:51:37.829166       1 server_others.go:192] "Using iptables Proxier"
	I0717 22:51:37.829216       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 22:51:37.829791       1 server.go:658] "Version info" version="v1.27.3"
	I0717 22:51:37.829977       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 22:51:37.831363       1 config.go:188] "Starting service config controller"
	I0717 22:51:37.831410       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 22:51:37.831444       1 config.go:97] "Starting endpoint slice config controller"
	I0717 22:51:37.831459       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 22:51:37.832034       1 config.go:315] "Starting node config controller"
	I0717 22:51:37.832076       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 22:51:37.931592       1 shared_informer.go:318] Caches are synced for service config
	I0717 22:51:37.931758       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0717 22:51:37.933663       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629] <==
	* I0717 22:51:32.583205       1 serving.go:348] Generated self-signed cert in-memory
	W0717 22:51:35.314570       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 22:51:35.314742       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 22:51:35.314792       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 22:51:35.314829       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 22:51:35.403073       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.3"
	I0717 22:51:35.407100       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 22:51:35.424286       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 22:51:35.424760       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 22:51:35.428670       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0717 22:51:35.428806       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 22:51:35.525127       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-07-17 22:50:43 UTC, ends at Mon 2023-07-17 23:05:05 UTC. --
	Jul 17 23:02:28 no-preload-935524 kubelet[1242]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 23:02:28 no-preload-935524 kubelet[1242]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 23:02:35 no-preload-935524 kubelet[1242]: E0717 23:02:35.750408    1242 remote_image.go:167] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 17 23:02:35 no-preload-935524 kubelet[1242]: E0717 23:02:35.750575    1242 kuberuntime_image.go:53] "Failed to pull image" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 17 23:02:35 no-preload-935524 kubelet[1242]: E0717 23:02:35.750736    1242 kuberuntime_manager.go:1212] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-v647t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod metrics-server-74d5c6b9c-tlbpl_kube-system(7c478efe-4435-45dd-a688-745872fc2918): ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 17 23:02:35 no-preload-935524 kubelet[1242]: E0717 23:02:35.750771    1242 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-74d5c6b9c-tlbpl" podUID=7c478efe-4435-45dd-a688-745872fc2918
	Jul 17 23:02:47 no-preload-935524 kubelet[1242]: E0717 23:02:47.713716    1242 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-tlbpl" podUID=7c478efe-4435-45dd-a688-745872fc2918
	Jul 17 23:03:01 no-preload-935524 kubelet[1242]: E0717 23:03:01.713312    1242 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-tlbpl" podUID=7c478efe-4435-45dd-a688-745872fc2918
	Jul 17 23:03:14 no-preload-935524 kubelet[1242]: E0717 23:03:14.713570    1242 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-tlbpl" podUID=7c478efe-4435-45dd-a688-745872fc2918
	Jul 17 23:03:28 no-preload-935524 kubelet[1242]: E0717 23:03:28.732029    1242 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 17 23:03:28 no-preload-935524 kubelet[1242]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 23:03:28 no-preload-935524 kubelet[1242]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 23:03:28 no-preload-935524 kubelet[1242]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 23:03:29 no-preload-935524 kubelet[1242]: E0717 23:03:29.713967    1242 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-tlbpl" podUID=7c478efe-4435-45dd-a688-745872fc2918
	Jul 17 23:03:43 no-preload-935524 kubelet[1242]: E0717 23:03:43.713641    1242 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-tlbpl" podUID=7c478efe-4435-45dd-a688-745872fc2918
	Jul 17 23:03:55 no-preload-935524 kubelet[1242]: E0717 23:03:55.713076    1242 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-tlbpl" podUID=7c478efe-4435-45dd-a688-745872fc2918
	Jul 17 23:04:10 no-preload-935524 kubelet[1242]: E0717 23:04:10.713204    1242 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-tlbpl" podUID=7c478efe-4435-45dd-a688-745872fc2918
	Jul 17 23:04:22 no-preload-935524 kubelet[1242]: E0717 23:04:22.712990    1242 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-tlbpl" podUID=7c478efe-4435-45dd-a688-745872fc2918
	Jul 17 23:04:28 no-preload-935524 kubelet[1242]: E0717 23:04:28.731825    1242 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 17 23:04:28 no-preload-935524 kubelet[1242]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 23:04:28 no-preload-935524 kubelet[1242]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 23:04:28 no-preload-935524 kubelet[1242]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 23:04:33 no-preload-935524 kubelet[1242]: E0717 23:04:33.713235    1242 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-tlbpl" podUID=7c478efe-4435-45dd-a688-745872fc2918
	Jul 17 23:04:47 no-preload-935524 kubelet[1242]: E0717 23:04:47.713279    1242 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-tlbpl" podUID=7c478efe-4435-45dd-a688-745872fc2918
	Jul 17 23:05:00 no-preload-935524 kubelet[1242]: E0717 23:05:00.714325    1242 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-tlbpl" podUID=7c478efe-4435-45dd-a688-745872fc2918
	
	* 
	* ==> storage-provisioner [4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6] <==
	* I0717 22:51:37.419777       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0717 22:52:07.430666       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b] <==
	* I0717 22:52:08.084080       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 22:52:08.106336       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 22:52:08.106713       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 22:52:25.512166       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 22:52:25.512331       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"86383f04-1a63-40f3-8c65-3b22e03ad414", APIVersion:"v1", ResourceVersion:"642", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-935524_4336aa79-edae-47dc-b9ae-4ebd35f74e08 became leader
	I0717 22:52:25.513206       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-935524_4336aa79-edae-47dc-b9ae-4ebd35f74e08!
	I0717 22:52:25.614204       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-935524_4336aa79-edae-47dc-b9ae-4ebd35f74e08!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-935524 -n no-preload-935524
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-935524 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5c6b9c-tlbpl
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-935524 describe pod metrics-server-74d5c6b9c-tlbpl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-935524 describe pod metrics-server-74d5c6b9c-tlbpl: exit status 1 (76.766827ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5c6b9c-tlbpl" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-935524 describe pod metrics-server-74d5c6b9c-tlbpl: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.55s)
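Note on the failure above: the captured logs show two distinct symptoms — the pod the test waits for never becomes ready within the timeout, and metrics-server-74d5c6b9c-tlbpl sits in ImagePullBackOff because the addon was deliberately pointed at the unreachable registry fake.domain (see the "addons enable metrics-server ... --registries=MetricsServer=fake.domain" entries in the Audit table further down). A minimal triage sketch against the same profile, assuming kubectl still has access to the no-preload-935524 context; the label selectors are the usual minikube addon labels and are an assumption here, not taken from this run:

	# list pods that are not Running across all namespaces (same check the post-mortem helper runs)
	kubectl --context no-preload-935524 get pods -A --field-selector=status.phase!=Running
	# inspect the image-pull events for metrics-server; a NotFound here just means the pod was already replaced
	kubectl --context no-preload-935524 -n kube-system describe pod -l k8s-app=metrics-server
	# check whether the dashboard addon ever created its workload in its namespace
	kubectl --context no-preload-935524 -n kubernetes-dashboard get deployments,pods

These are standard kubectl invocations for corroborating the ImagePullBackOff and missing-dashboard symptoms; the report's own output above is the authoritative record of what actually happened in this run.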

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0717 22:59:34.942242   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-332820 -n old-k8s-version-332820
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-07-17 23:08:00.762341658 +0000 UTC m=+5244.552127652
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-332820 -n old-k8s-version-332820
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-332820 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-332820 logs -n 25: (1.655392034s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-431736                                 | NoKubernetes-431736          | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:42 UTC |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p pause-482945                                        | pause-482945                 | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:42 UTC |
	| start   | -p embed-certs-571296                                  | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-366864                              | cert-expiration-366864       | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:42 UTC |
	| delete  | -p                                                     | disable-driver-mounts-615088 | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:42 UTC |
	|         | disable-driver-mounts-615088                           |                              |         |         |                     |                     |
	| start   | -p no-preload-935524                                   | no-preload-935524            | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| ssh     | -p NoKubernetes-431736 sudo                            | NoKubernetes-431736          | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-431736                                 | NoKubernetes-431736          | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:42 UTC |
	| start   | -p                                                     | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:44 UTC |
	|         | default-k8s-diff-port-504828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-332820        | old-k8s-version-332820       | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-332820                              | old-k8s-version-332820       | jenkins | v1.31.0 | 17 Jul 23 22:43 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-571296            | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 22:44 UTC | 17 Jul 23 22:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-571296                                  | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 22:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-935524             | no-preload-935524            | jenkins | v1.31.0 | 17 Jul 23 22:44 UTC | 17 Jul 23 22:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-935524                                   | no-preload-935524            | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-504828  | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC | 17 Jul 23 22:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC |                     |
	|         | default-k8s-diff-port-504828                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-332820             | old-k8s-version-332820       | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-332820                              | old-k8s-version-332820       | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC | 17 Jul 23 22:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-571296                 | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 22:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-571296                                  | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 22:46 UTC | 17 Jul 23 23:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-935524                  | no-preload-935524            | jenkins | v1.31.0 | 17 Jul 23 22:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-504828       | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-935524                                   | no-preload-935524            | jenkins | v1.31.0 | 17 Jul 23 22:47 UTC | 17 Jul 23 22:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:47 UTC | 17 Jul 23 23:01 UTC |
	|         | default-k8s-diff-port-504828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 22:47:37
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 22:47:37.527061   54649 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:47:37.527212   54649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:47:37.527221   54649 out.go:309] Setting ErrFile to fd 2...
	I0717 22:47:37.527228   54649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:47:37.527438   54649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-15759/.minikube/bin
	I0717 22:47:37.527980   54649 out.go:303] Setting JSON to false
	I0717 22:47:37.528901   54649 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9010,"bootTime":1689625048,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 22:47:37.528964   54649 start.go:138] virtualization: kvm guest
	I0717 22:47:37.531211   54649 out.go:177] * [default-k8s-diff-port-504828] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 22:47:37.533158   54649 notify.go:220] Checking for updates...
	I0717 22:47:37.533188   54649 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 22:47:37.535650   54649 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 22:47:37.537120   54649 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:47:37.538622   54649 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-15759/.minikube
	I0717 22:47:37.540087   54649 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 22:47:37.541460   54649 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 22:47:37.543023   54649 config.go:182] Loaded profile config "default-k8s-diff-port-504828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:47:37.543367   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:47:37.543410   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:47:37.557812   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41209
	I0717 22:47:37.558215   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:47:37.558854   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:47:37.558880   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:47:37.559209   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:47:37.559422   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:47:37.559654   54649 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 22:47:37.559930   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:47:37.559964   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:47:37.574919   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36243
	I0717 22:47:37.575395   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:47:37.575884   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:47:37.575907   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:47:37.576216   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:47:37.576373   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:47:37.609134   54649 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 22:47:37.610479   54649 start.go:298] selected driver: kvm2
	I0717 22:47:37.610497   54649 start.go:880] validating driver "kvm2" against &{Name:default-k8s-diff-port-504828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-504828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.118 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:47:37.610629   54649 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 22:47:37.611264   54649 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:37.611363   54649 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16899-15759/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 22:47:37.626733   54649 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.0
	I0717 22:47:37.627071   54649 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 22:47:37.627102   54649 cni.go:84] Creating CNI manager for ""
	I0717 22:47:37.627113   54649 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:47:37.627123   54649 start_flags.go:319] config:
	{Name:default-k8s-diff-port-504828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-504828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.118 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:47:37.627251   54649 iso.go:125] acquiring lock: {Name:mk0fc3164b0f4222cb2c3b274841202f1f6c0fc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:37.629965   54649 out.go:177] * Starting control plane node default-k8s-diff-port-504828 in cluster default-k8s-diff-port-504828
	I0717 22:47:32.766201   54573 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 22:47:32.766339   54573 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/config.json ...
	I0717 22:47:32.766467   54573 cache.go:107] acquiring lock: {Name:mk01bc74ef42cddd6cd05b75ec900cb2a05e15de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:32.766476   54573 cache.go:107] acquiring lock: {Name:mk672b2225edd60ecd8aa8e076d6e3579923204f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:32.766504   54573 cache.go:107] acquiring lock: {Name:mk1ec8b402c7d0685d25060e32c2f651eb2916fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:32.766539   54573 cache.go:107] acquiring lock: {Name:mkd18484b6a11488d3306ab3200047f68a7be660 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:32.766573   54573 start.go:365] acquiring machines lock for no-preload-935524: {Name:mk8a02d78ba6485e1a7d39c2e7feff944c6a9dbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 22:47:32.766576   54573 cache.go:107] acquiring lock: {Name:mkb3015efe537f010ace1f299991daca38e60845 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:32.766610   54573 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3 exists
	I0717 22:47:32.766586   54573 cache.go:107] acquiring lock: {Name:mkc8c0d0fa55ce47999adb3e73b20a24cafac7c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:32.766637   54573 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0 exists
	I0717 22:47:32.766653   54573 cache.go:96] cache image "registry.k8s.io/etcd:3.5.7-0" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0" took 100.155µs
	I0717 22:47:32.766659   54573 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I0717 22:47:32.766648   54573 cache.go:107] acquiring lock: {Name:mke2add190f322b938de65cf40269b08b3acfca3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:32.766656   54573 cache.go:107] acquiring lock: {Name:mk075beefd466e66915afc5543af4c3b175d5d80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:32.766681   54573 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 187.554µs
	I0717 22:47:32.766710   54573 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0717 22:47:32.766670   54573 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.7-0 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0 succeeded
	I0717 22:47:32.766735   54573 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1" took 88.679µs
	I0717 22:47:32.766748   54573 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0717 22:47:32.766629   54573 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3 exists
	I0717 22:47:32.766763   54573 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.27.3" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3" took 231.824µs
	I0717 22:47:32.766771   54573 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3 exists
	I0717 22:47:32.766717   54573 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I0717 22:47:32.766570   54573 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0717 22:47:32.766780   54573 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.27.3" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3" took 194.904µs
	I0717 22:47:32.766790   54573 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.27.3 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3 succeeded
	I0717 22:47:32.766787   54573 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 329.218µs
	I0717 22:47:32.766631   54573 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.27.3" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3" took 161.864µs
	I0717 22:47:32.766805   54573 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.27.3 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3 succeeded
	I0717 22:47:32.766774   54573 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.27.3 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3 succeeded
	I0717 22:47:32.766672   54573 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3 exists
	I0717 22:47:32.766820   54573 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.27.3" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3" took 238.693µs
	I0717 22:47:32.766828   54573 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.27.3 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3 succeeded
	I0717 22:47:32.766797   54573 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0717 22:47:32.766834   54573 cache.go:87] Successfully saved all images to host disk.
	I0717 22:47:37.631294   54649 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 22:47:37.631336   54649 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0717 22:47:37.631348   54649 cache.go:57] Caching tarball of preloaded images
	I0717 22:47:37.631442   54649 preload.go:174] Found /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 22:47:37.631456   54649 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 22:47:37.631555   54649 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/config.json ...
	I0717 22:47:37.631742   54649 start.go:365] acquiring machines lock for default-k8s-diff-port-504828: {Name:mk8a02d78ba6485e1a7d39c2e7feff944c6a9dbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 22:47:37.905723   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:47:40.977774   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:47:47.057804   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:47:50.129875   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:47:56.209815   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:47:59.281810   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:05.361786   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:08.433822   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:14.513834   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:17.585682   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:23.665811   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:26.737819   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:32.817800   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:35.889839   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:41.969818   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:45.041851   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:51.121816   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:54.193896   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:00.273812   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:03.345848   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:09.425796   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:12.497873   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:18.577847   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:21.649767   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:27.729823   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:30.801947   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:36.881840   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:39.953832   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:46.033825   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:49.105862   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:55.185814   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:58.257881   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:50:04.337852   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:50:07.409871   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:50:10.413979   54248 start.go:369] acquired machines lock for "embed-certs-571296" in 3m17.321305769s
	I0717 22:50:10.414028   54248 start.go:96] Skipping create...Using existing machine configuration
	I0717 22:50:10.414048   54248 fix.go:54] fixHost starting: 
	I0717 22:50:10.414400   54248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:50:10.414437   54248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:50:10.428711   54248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35793
	I0717 22:50:10.429132   54248 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:50:10.429628   54248 main.go:141] libmachine: Using API Version  1
	I0717 22:50:10.429671   54248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:50:10.430088   54248 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:50:10.430301   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:50:10.430491   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetState
	I0717 22:50:10.432357   54248 fix.go:102] recreateIfNeeded on embed-certs-571296: state=Stopped err=<nil>
	I0717 22:50:10.432375   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	W0717 22:50:10.432552   54248 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 22:50:10.434264   54248 out.go:177] * Restarting existing kvm2 VM for "embed-certs-571296" ...
	I0717 22:50:10.411622   53870 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:50:10.411707   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:50:10.413827   53870 machine.go:91] provisioned docker machine in 4m37.430605556s
	I0717 22:50:10.413860   53870 fix.go:56] fixHost completed within 4m37.451042302s
	I0717 22:50:10.413870   53870 start.go:83] releasing machines lock for "old-k8s-version-332820", held for 4m37.451061598s
	W0717 22:50:10.413907   53870 start.go:672] error starting host: provision: host is not running
	W0717 22:50:10.414004   53870 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0717 22:50:10.414014   53870 start.go:687] Will try again in 5 seconds ...
	I0717 22:50:10.435984   54248 main.go:141] libmachine: (embed-certs-571296) Calling .Start
	I0717 22:50:10.436181   54248 main.go:141] libmachine: (embed-certs-571296) Ensuring networks are active...
	I0717 22:50:10.436939   54248 main.go:141] libmachine: (embed-certs-571296) Ensuring network default is active
	I0717 22:50:10.437252   54248 main.go:141] libmachine: (embed-certs-571296) Ensuring network mk-embed-certs-571296 is active
	I0717 22:50:10.437751   54248 main.go:141] libmachine: (embed-certs-571296) Getting domain xml...
	I0717 22:50:10.438706   54248 main.go:141] libmachine: (embed-certs-571296) Creating domain...
	I0717 22:50:10.795037   54248 main.go:141] libmachine: (embed-certs-571296) Waiting to get IP...
	I0717 22:50:10.795808   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:10.796178   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:10.796237   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:10.796156   55063 retry.go:31] will retry after 189.390538ms: waiting for machine to come up
	I0717 22:50:10.987904   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:10.988435   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:10.988466   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:10.988382   55063 retry.go:31] will retry after 260.75291ms: waiting for machine to come up
	I0717 22:50:11.250849   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:11.251279   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:11.251323   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:11.251218   55063 retry.go:31] will retry after 421.317262ms: waiting for machine to come up
	I0717 22:50:11.673813   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:11.674239   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:11.674259   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:11.674206   55063 retry.go:31] will retry after 512.64366ms: waiting for machine to come up
	I0717 22:50:12.188810   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:12.189271   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:12.189298   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:12.189222   55063 retry.go:31] will retry after 489.02322ms: waiting for machine to come up
	I0717 22:50:12.679695   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:12.680108   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:12.680137   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:12.680012   55063 retry.go:31] will retry after 589.269905ms: waiting for machine to come up
	I0717 22:50:15.415915   53870 start.go:365] acquiring machines lock for old-k8s-version-332820: {Name:mk8a02d78ba6485e1a7d39c2e7feff944c6a9dbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 22:50:13.270668   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:13.271039   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:13.271069   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:13.270984   55063 retry.go:31] will retry after 722.873214ms: waiting for machine to come up
	I0717 22:50:13.996101   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:13.996681   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:13.996711   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:13.996623   55063 retry.go:31] will retry after 1.381840781s: waiting for machine to come up
	I0717 22:50:15.379777   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:15.380169   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:15.380197   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:15.380118   55063 retry.go:31] will retry after 1.335563851s: waiting for machine to come up
	I0717 22:50:16.718113   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:16.718637   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:16.718660   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:16.718575   55063 retry.go:31] will retry after 1.96500286s: waiting for machine to come up
	I0717 22:50:18.685570   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:18.686003   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:18.686023   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:18.685960   55063 retry.go:31] will retry after 2.007114073s: waiting for machine to come up
	I0717 22:50:20.694500   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:20.694961   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:20.694984   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:20.694916   55063 retry.go:31] will retry after 3.344996038s: waiting for machine to come up
	I0717 22:50:24.043423   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:24.043777   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:24.043799   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:24.043732   55063 retry.go:31] will retry after 3.031269711s: waiting for machine to come up
	I0717 22:50:27.077029   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:27.077447   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:27.077493   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:27.077379   55063 retry.go:31] will retry after 3.787872248s: waiting for machine to come up
	I0717 22:50:32.158403   54573 start.go:369] acquired machines lock for "no-preload-935524" in 2m59.391772757s
	I0717 22:50:32.158456   54573 start.go:96] Skipping create...Using existing machine configuration
	I0717 22:50:32.158478   54573 fix.go:54] fixHost starting: 
	I0717 22:50:32.158917   54573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:50:32.158960   54573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:50:32.177532   54573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41803
	I0717 22:50:32.177962   54573 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:50:32.178564   54573 main.go:141] libmachine: Using API Version  1
	I0717 22:50:32.178596   54573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:50:32.178981   54573 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:50:32.179197   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:50:32.179381   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetState
	I0717 22:50:32.181079   54573 fix.go:102] recreateIfNeeded on no-preload-935524: state=Stopped err=<nil>
	I0717 22:50:32.181104   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	W0717 22:50:32.181273   54573 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 22:50:32.183782   54573 out.go:177] * Restarting existing kvm2 VM for "no-preload-935524" ...
	I0717 22:50:32.185307   54573 main.go:141] libmachine: (no-preload-935524) Calling .Start
	I0717 22:50:32.185504   54573 main.go:141] libmachine: (no-preload-935524) Ensuring networks are active...
	I0717 22:50:32.186119   54573 main.go:141] libmachine: (no-preload-935524) Ensuring network default is active
	I0717 22:50:32.186543   54573 main.go:141] libmachine: (no-preload-935524) Ensuring network mk-no-preload-935524 is active
	I0717 22:50:32.186958   54573 main.go:141] libmachine: (no-preload-935524) Getting domain xml...
	I0717 22:50:32.187647   54573 main.go:141] libmachine: (no-preload-935524) Creating domain...
	I0717 22:50:32.567258   54573 main.go:141] libmachine: (no-preload-935524) Waiting to get IP...
	I0717 22:50:32.568423   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:32.568941   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:32.569021   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:32.568937   55160 retry.go:31] will retry after 239.368857ms: waiting for machine to come up
	I0717 22:50:30.866978   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:30.867476   54248 main.go:141] libmachine: (embed-certs-571296) Found IP for machine: 192.168.61.179
	I0717 22:50:30.867494   54248 main.go:141] libmachine: (embed-certs-571296) Reserving static IP address...
	I0717 22:50:30.867507   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has current primary IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:30.867958   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "embed-certs-571296", mac: "52:54:00:e0:4c:e5", ip: "192.168.61.179"} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:30.867994   54248 main.go:141] libmachine: (embed-certs-571296) Reserved static IP address: 192.168.61.179
	I0717 22:50:30.868012   54248 main.go:141] libmachine: (embed-certs-571296) DBG | skip adding static IP to network mk-embed-certs-571296 - found existing host DHCP lease matching {name: "embed-certs-571296", mac: "52:54:00:e0:4c:e5", ip: "192.168.61.179"}
	I0717 22:50:30.868034   54248 main.go:141] libmachine: (embed-certs-571296) DBG | Getting to WaitForSSH function...
	I0717 22:50:30.868052   54248 main.go:141] libmachine: (embed-certs-571296) Waiting for SSH to be available...
	I0717 22:50:30.870054   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:30.870366   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:30.870402   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:30.870514   54248 main.go:141] libmachine: (embed-certs-571296) DBG | Using SSH client type: external
	I0717 22:50:30.870545   54248 main.go:141] libmachine: (embed-certs-571296) DBG | Using SSH private key: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa (-rw-------)
	I0717 22:50:30.870596   54248 main.go:141] libmachine: (embed-certs-571296) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.179 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 22:50:30.870623   54248 main.go:141] libmachine: (embed-certs-571296) DBG | About to run SSH command:
	I0717 22:50:30.870637   54248 main.go:141] libmachine: (embed-certs-571296) DBG | exit 0
	I0717 22:50:30.965028   54248 main.go:141] libmachine: (embed-certs-571296) DBG | SSH cmd err, output: <nil>: 
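The probe logged just above ("About to run SSH command: exit 0") is how the provisioner decides that sshd inside the guest is reachable with the machine's key. A minimal Go sketch of the same check, assuming an external ssh binary and placeholder host/key values (illustrative only, not minikube's actual code path):

    // ssh_probe.go: illustrative sketch of the "exit 0" SSH reachability probe
    // shown in the log above. The flags mirror the logged ssh invocation; the
    // host, key path, and retry cadence are placeholders, not minikube's code.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func sshReachable(host, keyPath string) error {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@" + host,
            "exit 0", // success means sshd is up and the key is accepted
        }
        return exec.Command("ssh", args...).Run()
    }

    func main() {
        // Poll until the guest answers, the way the provisioner waits for SSH.
        for {
            if err := sshReachable("192.168.61.179", "/path/to/id_rsa"); err == nil {
                fmt.Println("SSH is available")
                return
            }
            time.Sleep(2 * time.Second)
        }
    }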
	I0717 22:50:30.965413   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetConfigRaw
	I0717 22:50:30.966103   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetIP
	I0717 22:50:30.968689   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:30.969031   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:30.969068   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:30.969282   54248 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296/config.json ...
	I0717 22:50:30.969474   54248 machine.go:88] provisioning docker machine ...
	I0717 22:50:30.969491   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:50:30.969725   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetMachineName
	I0717 22:50:30.969910   54248 buildroot.go:166] provisioning hostname "embed-certs-571296"
	I0717 22:50:30.969928   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetMachineName
	I0717 22:50:30.970057   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:50:30.972055   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:30.972390   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:30.972416   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:30.972590   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:50:30.972732   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:30.972851   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:30.973006   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:50:30.973150   54248 main.go:141] libmachine: Using SSH client type: native
	I0717 22:50:30.973572   54248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.179 22 <nil> <nil>}
	I0717 22:50:30.973586   54248 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-571296 && echo "embed-certs-571296" | sudo tee /etc/hostname
	I0717 22:50:31.119085   54248 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-571296
	
	I0717 22:50:31.119112   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:50:31.121962   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.122254   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:31.122287   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.122439   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:50:31.122634   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:31.122824   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:31.122969   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:50:31.123140   54248 main.go:141] libmachine: Using SSH client type: native
	I0717 22:50:31.123581   54248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.179 22 <nil> <nil>}
	I0717 22:50:31.123607   54248 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-571296' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-571296/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-571296' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 22:50:31.262347   54248 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:50:31.262373   54248 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16899-15759/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-15759/.minikube}
	I0717 22:50:31.262422   54248 buildroot.go:174] setting up certificates
	I0717 22:50:31.262431   54248 provision.go:83] configureAuth start
	I0717 22:50:31.262443   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetMachineName
	I0717 22:50:31.262717   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetIP
	I0717 22:50:31.265157   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.265555   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:31.265582   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.265716   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:50:31.267966   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.268299   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:31.268334   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.268482   54248 provision.go:138] copyHostCerts
	I0717 22:50:31.268529   54248 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem, removing ...
	I0717 22:50:31.268538   54248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem
	I0717 22:50:31.268602   54248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem (1078 bytes)
	I0717 22:50:31.268686   54248 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem, removing ...
	I0717 22:50:31.268698   54248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem
	I0717 22:50:31.268720   54248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem (1123 bytes)
	I0717 22:50:31.268769   54248 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem, removing ...
	I0717 22:50:31.268776   54248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem
	I0717 22:50:31.268794   54248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem (1675 bytes)
	I0717 22:50:31.268837   54248 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem org=jenkins.embed-certs-571296 san=[192.168.61.179 192.168.61.179 localhost 127.0.0.1 minikube embed-certs-571296]
	I0717 22:50:31.374737   54248 provision.go:172] copyRemoteCerts
	I0717 22:50:31.374796   54248 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 22:50:31.374818   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:50:31.377344   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.377664   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:31.377700   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.377873   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:50:31.378063   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:31.378223   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:50:31.378364   54248 sshutil.go:53] new ssh client: &{IP:192.168.61.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa Username:docker}
	I0717 22:50:31.474176   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 22:50:31.498974   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0717 22:50:31.522794   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 22:50:31.546276   54248 provision.go:86] duration metric: configureAuth took 283.830107ms
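The configureAuth step above regenerates the machine server certificate with the SANs listed in the "generating server cert" line and copies it into /etc/docker on the guest. A minimal sketch of issuing a certificate with those SANs via Go's crypto/x509; it self-signs for brevity, whereas the real flow signs with the minikube CA (subject, SAN list, and validity period here are assumptions taken from the log):

    // servercert.go: minimal sketch of issuing a server certificate whose SANs
    // match the log above (IP, localhost, hostname). Self-signed purely for
    // illustration; provisioning actually signs with the minikube CA key.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-571296"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs copied from the "generating server cert" log line.
            DNSNames:    []string{"localhost", "minikube", "embed-certs-571296"},
            IPAddresses: []net.IP{net.ParseIP("192.168.61.179"), net.ParseIP("127.0.0.1")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }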
	I0717 22:50:31.546313   54248 buildroot.go:189] setting minikube options for container-runtime
	I0717 22:50:31.546521   54248 config.go:182] Loaded profile config "embed-certs-571296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:50:31.546603   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:50:31.549119   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.549485   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:31.549544   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.549716   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:50:31.549898   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:31.550056   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:31.550206   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:50:31.550376   54248 main.go:141] libmachine: Using SSH client type: native
	I0717 22:50:31.550819   54248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.179 22 <nil> <nil>}
	I0717 22:50:31.550837   54248 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 22:50:31.884933   54248 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 22:50:31.884960   54248 machine.go:91] provisioned docker machine in 915.473611ms
	I0717 22:50:31.884973   54248 start.go:300] post-start starting for "embed-certs-571296" (driver="kvm2")
	I0717 22:50:31.884985   54248 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 22:50:31.885011   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:50:31.885399   54248 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 22:50:31.885444   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:50:31.887965   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.888302   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:31.888338   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.888504   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:50:31.888710   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:31.888862   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:50:31.888988   54248 sshutil.go:53] new ssh client: &{IP:192.168.61.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa Username:docker}
	I0717 22:50:31.983951   54248 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 22:50:31.988220   54248 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 22:50:31.988248   54248 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/addons for local assets ...
	I0717 22:50:31.988334   54248 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/files for local assets ...
	I0717 22:50:31.988429   54248 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> 229902.pem in /etc/ssl/certs
	I0717 22:50:31.988543   54248 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 22:50:31.997933   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:50:32.020327   54248 start.go:303] post-start completed in 135.337882ms
	I0717 22:50:32.020353   54248 fix.go:56] fixHost completed within 21.60630369s
	I0717 22:50:32.020377   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:50:32.023026   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:32.023382   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:32.023415   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:32.023665   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:50:32.023873   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:32.024047   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:32.024193   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:50:32.024348   54248 main.go:141] libmachine: Using SSH client type: native
	I0717 22:50:32.024722   54248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.179 22 <nil> <nil>}
	I0717 22:50:32.024734   54248 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 22:50:32.158218   54248 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689634232.105028258
	
	I0717 22:50:32.158252   54248 fix.go:206] guest clock: 1689634232.105028258
	I0717 22:50:32.158262   54248 fix.go:219] Guest: 2023-07-17 22:50:32.105028258 +0000 UTC Remote: 2023-07-17 22:50:32.020356843 +0000 UTC m=+219.067919578 (delta=84.671415ms)
	I0717 22:50:32.158286   54248 fix.go:190] guest clock delta is within tolerance: 84.671415ms
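The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the machine when the delta stays inside a tolerance. A small sketch of that comparison using the values from the log (the tolerance constant is an assumption, not minikube's setting):

    // clockdelta.go: sketch of the guest-clock check in the log above. The
    // guest reports its time via `date +%s.%N`; the host compares it to its
    // own clock and accepts the machine if the delta is inside a tolerance.
    package main

    import (
        "fmt"
        "math"
        "strconv"
        "time"
    )

    func guestClockDelta(dateOutput string, local time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(dateOutput, 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(local), nil
    }

    func main() {
        // Values taken from the log: guest reported 1689634232.105028258,
        // the host read 2023-07-17 22:50:32.020356843 UTC.
        delta, err := guestClockDelta("1689634232.105028258", time.Unix(1689634232, 20356843))
        if err != nil {
            panic(err)
        }
        const tolerance = time.Second // assumed threshold
        fmt.Printf("delta=%v within tolerance=%v: %v\n",
            delta, tolerance, math.Abs(float64(delta)) < float64(tolerance))
    }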
	I0717 22:50:32.158292   54248 start.go:83] releasing machines lock for "embed-certs-571296", held for 21.74428315s
	I0717 22:50:32.158327   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:50:32.158592   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetIP
	I0717 22:50:32.161034   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:32.161385   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:32.161418   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:32.161609   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:50:32.162089   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:50:32.162247   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:50:32.162322   54248 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 22:50:32.162368   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:50:32.162453   54248 ssh_runner.go:195] Run: cat /version.json
	I0717 22:50:32.162474   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:50:32.165101   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:32.165235   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:32.165564   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:32.165591   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:32.165615   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:32.165637   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:32.165688   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:50:32.165806   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:50:32.165877   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:32.165995   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:32.166172   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:50:32.166181   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:50:32.166307   54248 sshutil.go:53] new ssh client: &{IP:192.168.61.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa Username:docker}
	I0717 22:50:32.166363   54248 sshutil.go:53] new ssh client: &{IP:192.168.61.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa Username:docker}
	I0717 22:50:32.285102   54248 ssh_runner.go:195] Run: systemctl --version
	I0717 22:50:32.291185   54248 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 22:50:32.437104   54248 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 22:50:32.443217   54248 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 22:50:32.443291   54248 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:50:32.461161   54248 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 22:50:32.461181   54248 start.go:466] detecting cgroup driver to use...
	I0717 22:50:32.461237   54248 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 22:50:32.483011   54248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 22:50:32.497725   54248 docker.go:196] disabling cri-docker service (if available) ...
	I0717 22:50:32.497788   54248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 22:50:32.512008   54248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 22:50:32.532595   54248 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 22:50:32.654303   54248 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 22:50:32.783140   54248 docker.go:212] disabling docker service ...
	I0717 22:50:32.783209   54248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 22:50:32.795822   54248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 22:50:32.809540   54248 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 22:50:32.923229   54248 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 22:50:33.025589   54248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 22:50:33.039420   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 22:50:33.056769   54248 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 22:50:33.056831   54248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:50:33.066205   54248 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 22:50:33.066277   54248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:50:33.075559   54248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:50:33.084911   54248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:50:33.094270   54248 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 22:50:33.103819   54248 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 22:50:33.112005   54248 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 22:50:33.112070   54248 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 22:50:33.125459   54248 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 22:50:33.134481   54248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 22:50:33.240740   54248 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 22:50:33.418504   54248 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 22:50:33.418576   54248 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 22:50:33.424143   54248 start.go:534] Will wait 60s for crictl version
	I0717 22:50:33.424202   54248 ssh_runner.go:195] Run: which crictl
	I0717 22:50:33.428330   54248 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 22:50:33.465318   54248 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 22:50:33.465403   54248 ssh_runner.go:195] Run: crio --version
	I0717 22:50:33.516467   54248 ssh_runner.go:195] Run: crio --version
	I0717 22:50:33.569398   54248 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
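The CRI-O preparation above amounts to rewriting two keys in /etc/crio/crio.conf.d/02-crio.conf (pause_image and cgroup_manager) and restarting the service. A local sketch of that rewrite in Go; minikube itself runs sed over SSH, so treat the path and values as assumptions copied from the log:

    // criocfg.go: sketch of the config edits logged above (pause image and
    // cgroup driver). The real flow runs sed over SSH; this local rewrite is
    // purely illustrative.
    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            panic(err)
        }
        // A `systemctl restart crio` is still required for the runtime to pick
        // the new values up, as the log shows.
    }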
	I0717 22:50:32.810512   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:32.811060   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:32.811095   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:32.810988   55160 retry.go:31] will retry after 309.941434ms: waiting for machine to come up
	I0717 22:50:33.122633   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:33.123092   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:33.123138   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:33.123046   55160 retry.go:31] will retry after 487.561142ms: waiting for machine to come up
	I0717 22:50:33.611932   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:33.612512   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:33.612542   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:33.612485   55160 retry.go:31] will retry after 367.897327ms: waiting for machine to come up
	I0717 22:50:33.981820   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:33.982279   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:33.982326   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:33.982214   55160 retry.go:31] will retry after 630.28168ms: waiting for machine to come up
	I0717 22:50:34.614129   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:34.614625   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:34.614665   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:34.614569   55160 retry.go:31] will retry after 677.033607ms: waiting for machine to come up
	I0717 22:50:35.292873   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:35.293409   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:35.293443   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:35.293360   55160 retry.go:31] will retry after 1.011969157s: waiting for machine to come up
	I0717 22:50:36.306452   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:36.306895   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:36.306924   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:36.306836   55160 retry.go:31] will retry after 1.035213701s: waiting for machine to come up
	I0717 22:50:37.343727   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:37.344195   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:37.344227   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:37.344143   55160 retry.go:31] will retry after 1.820372185s: waiting for machine to come up
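The retry.go lines above poll libvirt for the domain's DHCP lease with a growing delay until the VM reports an IP. A sketch of that wait loop; lookupIP is a placeholder for the libvirt lease query, and the backoff schedule only approximates the logged intervals:

    // waitip.go: sketch of the "waiting for machine to come up" retry loop in
    // the log above. lookupIP is a stand-in; the real code asks libvirt for a
    // DHCP lease matching the domain's MAC address.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func lookupIP(domain string) (string, error) {
        // Placeholder: query the libvirt network for the domain's lease.
        return "", errors.New("unable to find current IP address")
    }

    func waitForIP(domain string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        backoff := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(domain); err == nil {
                return ip, nil
            }
            fmt.Printf("will retry after %v: waiting for machine to come up\n", backoff)
            time.Sleep(backoff)
            if backoff < 3*time.Second {
                backoff += backoff / 2 // grow roughly 1.5x per attempt, capped
            }
        }
        return "", fmt.Errorf("timed out waiting for %s to report an IP", domain)
    }

    func main() {
        if _, err := waitForIP("no-preload-935524", 10*time.Second); err != nil {
            fmt.Println(err)
        }
    }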
	I0717 22:50:33.571037   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetIP
	I0717 22:50:33.574233   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:33.574758   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:33.574796   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:33.575014   54248 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0717 22:50:33.579342   54248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:50:33.591600   54248 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 22:50:33.591678   54248 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:50:33.625951   54248 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 22:50:33.626026   54248 ssh_runner.go:195] Run: which lz4
	I0717 22:50:33.630581   54248 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 22:50:33.635135   54248 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 22:50:33.635171   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0717 22:50:35.389650   54248 crio.go:444] Took 1.759110 seconds to copy over tarball
	I0717 22:50:35.389728   54248 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 22:50:39.166682   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:39.167111   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:39.167146   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:39.167068   55160 retry.go:31] will retry after 1.739687633s: waiting for machine to come up
	I0717 22:50:40.909258   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:40.909752   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:40.909784   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:40.909694   55160 retry.go:31] will retry after 2.476966629s: waiting for machine to come up
	I0717 22:50:38.336151   54248 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.946397065s)
	I0717 22:50:38.336176   54248 crio.go:451] Took 2.946502 seconds to extract the tarball
	I0717 22:50:38.336184   54248 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 22:50:38.375618   54248 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:50:38.425357   54248 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 22:50:38.425377   54248 cache_images.go:84] Images are preloaded, skipping loading
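The preload decision above hinges on whether `crictl images --output json` already lists the expected control-plane image; if it does not, the preload tarball is copied over and extracted. A sketch of that check; the JSON field names are assumptions about crictl's output format, so verify them against your crictl version:

    // preloadcheck.go: sketch of the "are images preloaded?" decision in the
    // log. It shells out to crictl and looks for the apiserver tag.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "strings"
    )

    type crictlImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func hasImage(want string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        var imgs crictlImages
        if err := json.Unmarshal(out, &imgs); err != nil {
            return false, err
        }
        for _, img := range imgs.Images {
            for _, tag := range img.RepoTags {
                if strings.Contains(tag, want) {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.27.3")
        if err != nil {
            panic(err)
        }
        fmt.Println("preloaded:", ok) // false triggers the tarball copy + extract
    }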
	I0717 22:50:38.425449   54248 ssh_runner.go:195] Run: crio config
	I0717 22:50:38.511015   54248 cni.go:84] Creating CNI manager for ""
	I0717 22:50:38.511040   54248 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:50:38.511050   54248 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 22:50:38.511067   54248 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.179 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-571296 NodeName:embed-certs-571296 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.179"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.179 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 22:50:38.511213   54248 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.179
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-571296"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.179
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.179"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 22:50:38.511287   54248 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-571296 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.179
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:embed-certs-571296 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 22:50:38.511340   54248 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 22:50:38.522373   54248 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 22:50:38.522432   54248 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 22:50:38.532894   54248 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0717 22:50:38.550814   54248 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 22:50:38.567038   54248 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0717 22:50:38.583844   54248 ssh_runner.go:195] Run: grep 192.168.61.179	control-plane.minikube.internal$ /etc/hosts
	I0717 22:50:38.587687   54248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.179	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:50:38.600458   54248 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296 for IP: 192.168.61.179
	I0717 22:50:38.600490   54248 certs.go:190] acquiring lock for shared ca certs: {Name:mk358cdd8ffcf2f8ada4337c2bb687d932ef5afe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:50:38.600617   54248 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key
	I0717 22:50:38.600659   54248 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key
	I0717 22:50:38.600721   54248 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296/client.key
	I0717 22:50:38.600774   54248 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296/apiserver.key.1b57fe25
	I0717 22:50:38.600820   54248 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296/proxy-client.key
	I0717 22:50:38.600929   54248 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem (1338 bytes)
	W0717 22:50:38.600955   54248 certs.go:433] ignoring /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990_empty.pem, impossibly tiny 0 bytes
	I0717 22:50:38.600966   54248 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 22:50:38.600986   54248 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem (1078 bytes)
	I0717 22:50:38.601017   54248 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem (1123 bytes)
	I0717 22:50:38.601050   54248 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem (1675 bytes)
	I0717 22:50:38.601093   54248 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:50:38.601734   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 22:50:38.627490   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 22:50:38.654423   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 22:50:38.682997   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 22:50:38.712432   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 22:50:38.742901   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 22:50:38.768966   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 22:50:38.794778   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 22:50:38.819537   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 22:50:38.846730   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem --> /usr/share/ca-certificates/22990.pem (1338 bytes)
	I0717 22:50:38.870806   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /usr/share/ca-certificates/229902.pem (1708 bytes)
	I0717 22:50:38.894883   54248 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 22:50:38.911642   54248 ssh_runner.go:195] Run: openssl version
	I0717 22:50:38.917551   54248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 22:50:38.928075   54248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:50:38.932832   54248 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:50:38.932888   54248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:50:38.938574   54248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 22:50:38.948446   54248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22990.pem && ln -fs /usr/share/ca-certificates/22990.pem /etc/ssl/certs/22990.pem"
	I0717 22:50:38.958543   54248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22990.pem
	I0717 22:50:38.963637   54248 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 21:49 /usr/share/ca-certificates/22990.pem
	I0717 22:50:38.963687   54248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22990.pem
	I0717 22:50:38.969460   54248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22990.pem /etc/ssl/certs/51391683.0"
	I0717 22:50:38.979718   54248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/229902.pem && ln -fs /usr/share/ca-certificates/229902.pem /etc/ssl/certs/229902.pem"
	I0717 22:50:38.989796   54248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/229902.pem
	I0717 22:50:38.994721   54248 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 21:49 /usr/share/ca-certificates/229902.pem
	I0717 22:50:38.994779   54248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/229902.pem
	I0717 22:50:39.000394   54248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/229902.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 22:50:39.011176   54248 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 22:50:39.016792   54248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 22:50:39.022959   54248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 22:50:39.029052   54248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 22:50:39.035096   54248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 22:50:39.040890   54248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 22:50:39.047007   54248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
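The openssl `-checkend 86400` runs above verify that none of the control-plane certificates expire within the next 24 hours. The same check can be expressed natively in Go; the file path below is a placeholder:

    // certcheck.go: sketch of the expiry checks at the end of the log above
    // (`openssl x509 -noout -checkend 86400`): report whether a certificate
    // expires within the next 24 hours.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }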
	I0717 22:50:39.053316   54248 kubeadm.go:404] StartCluster: {Name:embed-certs-571296 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-571296 Namespace:def
ault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.179 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/m
inikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:50:39.053429   54248 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 22:50:39.053479   54248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:50:39.082896   54248 cri.go:89] found id: ""
	I0717 22:50:39.082981   54248 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 22:50:39.092999   54248 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 22:50:39.093021   54248 kubeadm.go:636] restartCluster start
	I0717 22:50:39.093076   54248 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 22:50:39.102254   54248 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:39.103361   54248 kubeconfig.go:92] found "embed-certs-571296" server: "https://192.168.61.179:8443"
	I0717 22:50:39.105846   54248 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 22:50:39.114751   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:39.114825   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:39.125574   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:39.626315   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:39.626406   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:39.637943   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:40.126535   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:40.126643   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:40.139075   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:40.626167   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:40.626306   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:40.638180   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:41.125818   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:41.125919   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:41.137569   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:41.625798   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:41.625900   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:41.637416   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:42.125972   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:42.126076   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:42.137316   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:42.625866   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:42.625964   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:42.637524   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:43.388908   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:43.389400   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:43.389434   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:43.389373   55160 retry.go:31] will retry after 2.639442454s: waiting for machine to come up
	I0717 22:50:46.032050   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:46.032476   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:46.032510   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:46.032419   55160 retry.go:31] will retry after 2.750548097s: waiting for machine to come up
	I0717 22:50:43.126317   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:43.126425   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:43.137978   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:43.626637   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:43.626751   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:43.638260   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:44.125834   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:44.125922   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:44.136925   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:44.626547   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:44.626647   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:44.638426   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:45.125978   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:45.126061   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:45.137496   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:45.626448   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:45.626511   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:45.638236   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:46.125776   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:46.125849   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:46.137916   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:46.626561   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:46.626674   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:46.638555   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:47.126090   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:47.126210   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:47.138092   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:47.626721   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:47.626802   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:47.637828   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:48.785507   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:48.785955   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:48.785987   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:48.785912   55160 retry.go:31] will retry after 4.05132206s: waiting for machine to come up
	I0717 22:50:48.126359   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:48.126438   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:48.137826   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:48.626413   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:48.626507   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:48.638354   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:49.114916   54248 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 22:50:49.114971   54248 kubeadm.go:1128] stopping kube-system containers ...
	I0717 22:50:49.114981   54248 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 22:50:49.115054   54248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:50:49.149465   54248 cri.go:89] found id: ""
	I0717 22:50:49.149558   54248 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 22:50:49.165197   54248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 22:50:49.174386   54248 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:50:49.174452   54248 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 22:50:49.183137   54248 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 22:50:49.183162   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:50:49.294495   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:50:50.169663   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:50:50.373276   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:50:50.485690   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:50:50.551312   54248 api_server.go:52] waiting for apiserver process to appear ...
	I0717 22:50:50.551389   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:50:51.066760   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:50:51.566423   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:50:52.066949   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:50:52.566304   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:50:54.227701   54649 start.go:369] acquired machines lock for "default-k8s-diff-port-504828" in 3m16.595911739s
	I0717 22:50:54.227764   54649 start.go:96] Skipping create...Using existing machine configuration
	I0717 22:50:54.227786   54649 fix.go:54] fixHost starting: 
	I0717 22:50:54.228206   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:50:54.228246   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:50:54.245721   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
	I0717 22:50:54.246143   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:50:54.246746   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:50:54.246783   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:50:54.247139   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:50:54.247353   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:50:54.247512   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetState
	I0717 22:50:54.249590   54649 fix.go:102] recreateIfNeeded on default-k8s-diff-port-504828: state=Stopped err=<nil>
	I0717 22:50:54.249630   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	W0717 22:50:54.249835   54649 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 22:50:54.251932   54649 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-504828" ...
	I0717 22:50:52.838478   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:52.839101   54573 main.go:141] libmachine: (no-preload-935524) Found IP for machine: 192.168.39.6
	I0717 22:50:52.839120   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has current primary IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:52.839129   54573 main.go:141] libmachine: (no-preload-935524) Reserving static IP address...
	I0717 22:50:52.839689   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "no-preload-935524", mac: "52:54:00:dc:7e:aa", ip: "192.168.39.6"} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:52.839724   54573 main.go:141] libmachine: (no-preload-935524) DBG | skip adding static IP to network mk-no-preload-935524 - found existing host DHCP lease matching {name: "no-preload-935524", mac: "52:54:00:dc:7e:aa", ip: "192.168.39.6"}
	I0717 22:50:52.839737   54573 main.go:141] libmachine: (no-preload-935524) Reserved static IP address: 192.168.39.6
	I0717 22:50:52.839752   54573 main.go:141] libmachine: (no-preload-935524) Waiting for SSH to be available...
	I0717 22:50:52.839769   54573 main.go:141] libmachine: (no-preload-935524) DBG | Getting to WaitForSSH function...
	I0717 22:50:52.842402   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:52.842739   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:52.842773   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:52.842861   54573 main.go:141] libmachine: (no-preload-935524) DBG | Using SSH client type: external
	I0717 22:50:52.842889   54573 main.go:141] libmachine: (no-preload-935524) DBG | Using SSH private key: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/no-preload-935524/id_rsa (-rw-------)
	I0717 22:50:52.842929   54573 main.go:141] libmachine: (no-preload-935524) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.6 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16899-15759/.minikube/machines/no-preload-935524/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 22:50:52.842947   54573 main.go:141] libmachine: (no-preload-935524) DBG | About to run SSH command:
	I0717 22:50:52.842962   54573 main.go:141] libmachine: (no-preload-935524) DBG | exit 0
	I0717 22:50:52.942283   54573 main.go:141] libmachine: (no-preload-935524) DBG | SSH cmd err, output: <nil>: 
	I0717 22:50:52.942665   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetConfigRaw
	I0717 22:50:52.943403   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetIP
	I0717 22:50:52.946152   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:52.946546   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:52.946587   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:52.946823   54573 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/config.json ...
	I0717 22:50:52.947043   54573 machine.go:88] provisioning docker machine ...
	I0717 22:50:52.947062   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:50:52.947259   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetMachineName
	I0717 22:50:52.947411   54573 buildroot.go:166] provisioning hostname "no-preload-935524"
	I0717 22:50:52.947431   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetMachineName
	I0717 22:50:52.947556   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:50:52.950010   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:52.950364   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:52.950394   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:52.950539   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:50:52.950709   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:52.950849   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:52.950980   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:50:52.951165   54573 main.go:141] libmachine: Using SSH client type: native
	I0717 22:50:52.951809   54573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0717 22:50:52.951831   54573 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-935524 && echo "no-preload-935524" | sudo tee /etc/hostname
	I0717 22:50:53.102629   54573 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-935524
	
	I0717 22:50:53.102665   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:50:53.105306   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.105689   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:53.105724   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.105856   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:50:53.106048   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:53.106219   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:53.106362   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:50:53.106504   54573 main.go:141] libmachine: Using SSH client type: native
	I0717 22:50:53.106886   54573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0717 22:50:53.106904   54573 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-935524' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-935524/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-935524' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 22:50:53.250601   54573 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:50:53.250631   54573 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16899-15759/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-15759/.minikube}
	I0717 22:50:53.250711   54573 buildroot.go:174] setting up certificates
	I0717 22:50:53.250721   54573 provision.go:83] configureAuth start
	I0717 22:50:53.250735   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetMachineName
	I0717 22:50:53.251063   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetIP
	I0717 22:50:53.253864   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.254309   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:53.254344   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.254513   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:50:53.256938   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.257385   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:53.257429   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.257534   54573 provision.go:138] copyHostCerts
	I0717 22:50:53.257595   54573 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem, removing ...
	I0717 22:50:53.257607   54573 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem
	I0717 22:50:53.257682   54573 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem (1078 bytes)
	I0717 22:50:53.257804   54573 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem, removing ...
	I0717 22:50:53.257816   54573 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem
	I0717 22:50:53.257843   54573 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem (1123 bytes)
	I0717 22:50:53.257929   54573 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem, removing ...
	I0717 22:50:53.257938   54573 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem
	I0717 22:50:53.257964   54573 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem (1675 bytes)
	I0717 22:50:53.258060   54573 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem org=jenkins.no-preload-935524 san=[192.168.39.6 192.168.39.6 localhost 127.0.0.1 minikube no-preload-935524]
	I0717 22:50:53.392234   54573 provision.go:172] copyRemoteCerts
	I0717 22:50:53.392307   54573 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 22:50:53.392335   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:50:53.395139   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.395529   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:53.395560   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.395734   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:50:53.395932   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:53.396109   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:50:53.396268   54573 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/no-preload-935524/id_rsa Username:docker}
	I0717 22:50:53.495214   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 22:50:53.523550   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0717 22:50:53.552276   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 22:50:53.576026   54573 provision.go:86] duration metric: configureAuth took 325.291158ms
	I0717 22:50:53.576057   54573 buildroot.go:189] setting minikube options for container-runtime
	I0717 22:50:53.576313   54573 config.go:182] Loaded profile config "no-preload-935524": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:50:53.576414   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:50:53.578969   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.579363   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:53.579404   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.579585   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:50:53.579783   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:53.579943   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:53.580113   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:50:53.580302   54573 main.go:141] libmachine: Using SSH client type: native
	I0717 22:50:53.580952   54573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0717 22:50:53.580979   54573 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 22:50:53.948696   54573 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 22:50:53.948725   54573 machine.go:91] provisioned docker machine in 1.001666705s
	I0717 22:50:53.948737   54573 start.go:300] post-start starting for "no-preload-935524" (driver="kvm2")
	I0717 22:50:53.948756   54573 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 22:50:53.948788   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:50:53.949144   54573 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 22:50:53.949179   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:50:53.951786   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.952221   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:53.952255   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.952468   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:50:53.952642   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:53.952863   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:50:53.953001   54573 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/no-preload-935524/id_rsa Username:docker}
	I0717 22:50:54.054995   54573 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 22:50:54.060431   54573 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 22:50:54.060455   54573 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/addons for local assets ...
	I0717 22:50:54.060524   54573 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/files for local assets ...
	I0717 22:50:54.060624   54573 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> 229902.pem in /etc/ssl/certs
	I0717 22:50:54.060737   54573 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 22:50:54.072249   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:50:54.094894   54573 start.go:303] post-start completed in 146.143243ms
	I0717 22:50:54.094919   54573 fix.go:56] fixHost completed within 21.936441056s
	I0717 22:50:54.094937   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:50:54.097560   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:54.097893   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:54.097926   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:54.098153   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:50:54.098377   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:54.098561   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:54.098729   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:50:54.098899   54573 main.go:141] libmachine: Using SSH client type: native
	I0717 22:50:54.099308   54573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0717 22:50:54.099323   54573 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 22:50:54.227537   54573 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689634254.168158155
	
	I0717 22:50:54.227562   54573 fix.go:206] guest clock: 1689634254.168158155
	I0717 22:50:54.227573   54573 fix.go:219] Guest: 2023-07-17 22:50:54.168158155 +0000 UTC Remote: 2023-07-17 22:50:54.094922973 +0000 UTC m=+201.463147612 (delta=73.235182ms)
	I0717 22:50:54.227598   54573 fix.go:190] guest clock delta is within tolerance: 73.235182ms
	I0717 22:50:54.227604   54573 start.go:83] releasing machines lock for "no-preload-935524", held for 22.06917115s
	I0717 22:50:54.227636   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:50:54.227891   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetIP
	I0717 22:50:54.230831   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:54.231223   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:54.231262   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:54.231367   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:50:54.231932   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:50:54.232109   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:50:54.232181   54573 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 22:50:54.232226   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:50:54.232322   54573 ssh_runner.go:195] Run: cat /version.json
	I0717 22:50:54.232354   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:50:54.235001   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:54.235351   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:54.235429   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:54.235463   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:54.235600   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:50:54.235791   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:54.235825   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:54.235857   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:54.235969   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:50:54.236027   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:50:54.236119   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:54.236253   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:50:54.236254   54573 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/no-preload-935524/id_rsa Username:docker}
	I0717 22:50:54.236392   54573 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/no-preload-935524/id_rsa Username:docker}
	I0717 22:50:54.360160   54573 ssh_runner.go:195] Run: systemctl --version
	I0717 22:50:54.367093   54573 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 22:50:54.523956   54573 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 22:50:54.531005   54573 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 22:50:54.531121   54573 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:50:54.548669   54573 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 22:50:54.548697   54573 start.go:466] detecting cgroup driver to use...
	I0717 22:50:54.548768   54573 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 22:50:54.564722   54573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 22:50:54.577237   54573 docker.go:196] disabling cri-docker service (if available) ...
	I0717 22:50:54.577303   54573 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 22:50:54.590625   54573 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 22:50:54.603897   54573 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 22:50:54.731958   54573 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 22:50:54.862565   54573 docker.go:212] disabling docker service ...
	I0717 22:50:54.862632   54573 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 22:50:54.875946   54573 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 22:50:54.888617   54573 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 22:50:54.997410   54573 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 22:50:55.110094   54573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 22:50:55.123729   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 22:50:55.144670   54573 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 22:50:55.144754   54573 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:50:55.154131   54573 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 22:50:55.154193   54573 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:50:55.164669   54573 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:50:55.177189   54573 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:50:55.189292   54573 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 22:50:55.204022   54573 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 22:50:55.212942   54573 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 22:50:55.213006   54573 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 22:50:55.232951   54573 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 22:50:55.246347   54573 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 22:50:55.366491   54573 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 22:50:55.544250   54573 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 22:50:55.544336   54573 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 22:50:55.550952   54573 start.go:534] Will wait 60s for crictl version
	I0717 22:50:55.551021   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:50:55.558527   54573 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 22:50:55.602591   54573 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 22:50:55.602687   54573 ssh_runner.go:195] Run: crio --version
	I0717 22:50:55.663719   54573 ssh_runner.go:195] Run: crio --version
	I0717 22:50:55.726644   54573 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 22:50:54.253440   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .Start
	I0717 22:50:54.253678   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Ensuring networks are active...
	I0717 22:50:54.254444   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Ensuring network default is active
	I0717 22:50:54.254861   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Ensuring network mk-default-k8s-diff-port-504828 is active
	I0717 22:50:54.255337   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Getting domain xml...
	I0717 22:50:54.256194   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Creating domain...
	I0717 22:50:54.643844   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting to get IP...
	I0717 22:50:54.644894   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:50:54.645362   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:50:54.645465   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:50:54.645359   55310 retry.go:31] will retry after 296.655364ms: waiting for machine to come up
	I0717 22:50:54.943927   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:50:54.944465   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:50:54.944500   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:50:54.944408   55310 retry.go:31] will retry after 351.801959ms: waiting for machine to come up
	I0717 22:50:55.298164   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:50:55.298678   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:50:55.298710   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:50:55.298642   55310 retry.go:31] will retry after 354.726659ms: waiting for machine to come up
	I0717 22:50:55.655122   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:50:55.655582   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:50:55.655710   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:50:55.655633   55310 retry.go:31] will retry after 540.353024ms: waiting for machine to come up
	I0717 22:50:56.197370   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:50:56.197929   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:50:56.197963   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:50:56.197897   55310 retry.go:31] will retry after 602.667606ms: waiting for machine to come up
	I0717 22:50:56.802746   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:50:56.803401   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:50:56.803431   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:50:56.803344   55310 retry.go:31] will retry after 675.557445ms: waiting for machine to come up
	I0717 22:50:57.480002   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:50:57.480476   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:50:57.480508   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:50:57.480423   55310 retry.go:31] will retry after 898.307594ms: waiting for machine to come up
	I0717 22:50:55.728247   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetIP
	I0717 22:50:55.731423   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:55.731871   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:55.731910   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:55.732109   54573 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 22:50:55.736921   54573 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:50:55.751844   54573 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 22:50:55.751895   54573 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:50:55.787286   54573 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 22:50:55.787316   54573 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.27.3 registry.k8s.io/kube-controller-manager:v1.27.3 registry.k8s.io/kube-scheduler:v1.27.3 registry.k8s.io/kube-proxy:v1.27.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.7-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 22:50:55.787387   54573 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:50:55.787398   54573 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.27.3
	I0717 22:50:55.787418   54573 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 22:50:55.787450   54573 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.27.3
	I0717 22:50:55.787589   54573 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.27.3
	I0717 22:50:55.787602   54573 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0717 22:50:55.787630   54573 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.7-0
	I0717 22:50:55.787648   54573 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0717 22:50:55.788865   54573 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.27.3
	I0717 22:50:55.788870   54573 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0717 22:50:55.788875   54573 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:50:55.788919   54573 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 22:50:55.788929   54573 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0717 22:50:55.788869   54573 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.27.3
	I0717 22:50:55.788955   54573 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.27.3
	I0717 22:50:55.789279   54573 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.7-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.7-0
	I0717 22:50:55.956462   54573 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.27.3
	I0717 22:50:55.959183   54573 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0717 22:50:55.960353   54573 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.27.3
	I0717 22:50:55.961871   54573 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.7-0
	I0717 22:50:55.963472   54573 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0717 22:50:55.970739   54573 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 22:50:55.992476   54573 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.27.3
	I0717 22:50:56.099305   54573 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.27.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.27.3" does not exist at hash "41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a" in container runtime
	I0717 22:50:56.099353   54573 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.27.3
	I0717 22:50:56.099399   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:50:56.144906   54573 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:50:56.175359   54573 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.27.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.27.3" does not exist at hash "08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a" in container runtime
	I0717 22:50:56.175407   54573 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.27.3
	I0717 22:50:56.175409   54573 cache_images.go:116] "registry.k8s.io/etcd:3.5.7-0" needs transfer: "registry.k8s.io/etcd:3.5.7-0" does not exist at hash "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681" in container runtime
	I0717 22:50:56.175444   54573 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.7-0
	I0717 22:50:56.175508   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:50:56.175549   54573 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0717 22:50:56.175452   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:50:56.175577   54573 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0717 22:50:56.175622   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:50:56.205829   54573 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.27.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.27.3" does not exist at hash "7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f" in container runtime
	I0717 22:50:56.205877   54573 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 22:50:56.205929   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:50:56.205962   54573 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.27.3
	I0717 22:50:56.205875   54573 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.27.3" needs transfer: "registry.k8s.io/kube-proxy:v1.27.3" does not exist at hash "5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c" in container runtime
	I0717 22:50:56.206017   54573 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.27.3
	I0717 22:50:56.206039   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:50:56.230299   54573 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0717 22:50:56.230358   54573 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:50:56.230406   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:50:56.230508   54573 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0717 22:50:56.230526   54573 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.7-0
	I0717 22:50:56.230585   54573 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.27.3
	I0717 22:50:56.230619   54573 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 22:50:56.280737   54573 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.27.3
	I0717 22:50:56.280740   54573 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3
	I0717 22:50:56.280876   54573 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0717 22:50:56.346096   54573 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0
	I0717 22:50:56.346185   54573 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0717 22:50:56.346213   54573 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.7-0
	I0717 22:50:56.346257   54573 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3
	I0717 22:50:56.346281   54573 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I0717 22:50:56.346325   54573 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:50:56.346360   54573 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3
	I0717 22:50:56.346370   54573 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0717 22:50:56.346409   54573 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0717 22:50:56.361471   54573 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3
	I0717 22:50:56.361511   54573 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.27.3 (exists)
	I0717 22:50:56.361546   54573 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0717 22:50:56.361605   54573 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.27.3
	I0717 22:50:56.361606   54573 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0717 22:50:56.410058   54573 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 22:50:56.410140   54573 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.27.3 (exists)
	I0717 22:50:56.410177   54573 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0717 22:50:56.410222   54573 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0717 22:50:56.410317   54573 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.7-0 (exists)
	I0717 22:50:56.410389   54573 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.27.3 (exists)
	I0717 22:50:53.066719   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:50:53.096978   54248 api_server.go:72] duration metric: took 2.545662837s to wait for apiserver process to appear ...
	I0717 22:50:53.097002   54248 api_server.go:88] waiting for apiserver healthz status ...
	I0717 22:50:53.097021   54248 api_server.go:253] Checking apiserver healthz at https://192.168.61.179:8443/healthz ...
	I0717 22:50:57.043968   54248 api_server.go:279] https://192.168.61.179:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 22:50:57.044010   54248 api_server.go:103] status: https://192.168.61.179:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 22:50:57.544722   54248 api_server.go:253] Checking apiserver healthz at https://192.168.61.179:8443/healthz ...
	I0717 22:50:57.550687   54248 api_server.go:279] https://192.168.61.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 22:50:57.550718   54248 api_server.go:103] status: https://192.168.61.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 22:50:58.045135   54248 api_server.go:253] Checking apiserver healthz at https://192.168.61.179:8443/healthz ...
	I0717 22:50:58.058934   54248 api_server.go:279] https://192.168.61.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 22:50:58.058970   54248 api_server.go:103] status: https://192.168.61.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 22:50:58.544766   54248 api_server.go:253] Checking apiserver healthz at https://192.168.61.179:8443/healthz ...
	I0717 22:50:58.550628   54248 api_server.go:279] https://192.168.61.179:8443/healthz returned 200:
	ok
	I0717 22:50:58.559879   54248 api_server.go:141] control plane version: v1.27.3
	I0717 22:50:58.559912   54248 api_server.go:131] duration metric: took 5.462902985s to wait for apiserver health ...
	I0717 22:50:58.559925   54248 cni.go:84] Creating CNI manager for ""
	I0717 22:50:58.559936   54248 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:50:58.605706   54248 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
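	(For reference, the healthz polling recorded a few lines above — api_server.go repeatedly GETting https://192.168.61.179:8443/healthz until it stops returning 403/500 and answers 200 "ok" — can be approximated with the minimal Go sketch below. The URL is copied from the log; the 500ms retry interval, the 2-minute deadline, and the InsecureSkipVerify transport are illustrative assumptions, not minikube's actual settings.)

		package main

		import (
			"crypto/tls"
			"fmt"
			"io"
			"net/http"
			"time"
		)

		func main() {
			// Anonymous HTTPS client; skipping TLS verification is an assumption for the sketch.
			client := &http.Client{
				Timeout: 5 * time.Second,
				Transport: &http.Transport{
					TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
				},
			}
			url := "https://192.168.61.179:8443/healthz" // endpoint taken from the log above
			deadline := time.Now().Add(2 * time.Minute)
			for time.Now().Before(deadline) {
				resp, err := client.Get(url)
				if err == nil {
					body, _ := io.ReadAll(resp.Body)
					resp.Body.Close()
					fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
					if resp.StatusCode == http.StatusOK {
						return // "ok" — the control plane reports healthy
					}
				}
				time.Sleep(500 * time.Millisecond) // retry until healthy or deadline
			}
			fmt.Println("timed out waiting for apiserver healthz")
		}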
	I0717 22:50:58.380501   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:50:58.380825   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:50:58.380842   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:50:58.380780   55310 retry.go:31] will retry after 1.23430246s: waiting for machine to come up
	I0717 22:50:59.617145   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:50:59.617808   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:50:59.617841   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:50:59.617730   55310 retry.go:31] will retry after 1.214374623s: waiting for machine to come up
	I0717 22:51:00.834129   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:00.834639   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:51:00.834680   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:51:00.834594   55310 retry.go:31] will retry after 1.950432239s: waiting for machine to come up
	I0717 22:50:58.680414   54573 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.27.3: (2.318705948s)
	I0717 22:50:58.680448   54573 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3 from cache
	I0717 22:50:58.680485   54573 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.27.3: (2.318846109s)
	I0717 22:50:58.680525   54573 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.27.3 (exists)
	I0717 22:50:58.680548   54573 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.270351678s)
	I0717 22:50:58.680595   54573 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0717 22:50:58.680614   54573 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0717 22:50:58.680674   54573 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0717 22:51:01.356090   54573 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.27.3: (2.675377242s)
	I0717 22:51:01.356124   54573 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3 from cache
	I0717 22:51:01.356174   54573 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0717 22:51:01.356232   54573 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0717 22:50:58.607184   54248 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 22:50:58.656720   54248 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 22:50:58.740705   54248 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 22:50:58.760487   54248 system_pods.go:59] 8 kube-system pods found
	I0717 22:50:58.760530   54248 system_pods.go:61] "coredns-5d78c9869d-pwd8q" [f8079ab4-1d34-4847-bdb9-7d0a500ed732] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 22:50:58.760542   54248 system_pods.go:61] "etcd-embed-certs-571296" [e2a4f2bb-a767-484f-9339-7024168bb59d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 22:50:58.760553   54248 system_pods.go:61] "kube-apiserver-embed-certs-571296" [313d49ba-2814-49e7-8b97-9c278fd33686] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 22:50:58.760600   54248 system_pods.go:61] "kube-controller-manager-embed-certs-571296" [03ede9e6-f06a-45a2-bafc-0ae24db96be8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 22:50:58.760720   54248 system_pods.go:61] "kube-proxy-kpt5d" [109fb9ce-61ab-46b0-aaf8-478d61c16fe9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 22:50:58.760754   54248 system_pods.go:61] "kube-scheduler-embed-certs-571296" [a10941b1-ac81-4224-bc9e-89228ad3d5c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 22:50:58.760765   54248 system_pods.go:61] "metrics-server-74d5c6b9c-jl7jl" [251ed989-12c1-49e5-bec1-114c3548c8fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:50:58.760784   54248 system_pods.go:61] "storage-provisioner" [fb7f6371-8788-4037-8eaf-6dc2189102ec] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 22:50:58.760795   54248 system_pods.go:74] duration metric: took 20.068616ms to wait for pod list to return data ...
	I0717 22:50:58.760807   54248 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:50:58.777293   54248 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:50:58.777328   54248 node_conditions.go:123] node cpu capacity is 2
	I0717 22:50:58.777343   54248 node_conditions.go:105] duration metric: took 16.528777ms to run NodePressure ...
	I0717 22:50:58.777364   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:50:59.270627   54248 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 22:50:59.277045   54248 kubeadm.go:787] kubelet initialised
	I0717 22:50:59.277074   54248 kubeadm.go:788] duration metric: took 6.413321ms waiting for restarted kubelet to initialise ...
	I0717 22:50:59.277083   54248 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:50:59.285338   54248 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-pwd8q" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:01.304495   54248 pod_ready.go:102] pod "coredns-5d78c9869d-pwd8q" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:02.787568   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:02.788090   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:51:02.788118   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:51:02.788031   55310 retry.go:31] will retry after 2.897894179s: waiting for machine to come up
	I0717 22:51:05.687387   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:05.687774   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:51:05.687816   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:51:05.687724   55310 retry.go:31] will retry after 3.029953032s: waiting for machine to come up
	I0717 22:51:02.822684   54573 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.466424442s)
	I0717 22:51:02.822717   54573 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0717 22:51:02.822741   54573 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.7-0
	I0717 22:51:02.822790   54573 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.7-0
	I0717 22:51:03.306481   54248 pod_ready.go:102] pod "coredns-5d78c9869d-pwd8q" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:04.302530   54248 pod_ready.go:92] pod "coredns-5d78c9869d-pwd8q" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:04.302560   54248 pod_ready.go:81] duration metric: took 5.01718551s waiting for pod "coredns-5d78c9869d-pwd8q" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:04.302573   54248 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:06.320075   54248 pod_ready.go:102] pod "etcd-embed-certs-571296" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:08.719593   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:08.720084   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:51:08.720116   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:51:08.720015   55310 retry.go:31] will retry after 3.646843477s: waiting for machine to come up
	I0717 22:51:12.370696   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.371189   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Found IP for machine: 192.168.72.118
	I0717 22:51:12.371225   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has current primary IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.371237   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Reserving static IP address...
	I0717 22:51:12.371698   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-504828", mac: "52:54:00:28:6f:f7", ip: "192.168.72.118"} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:12.371729   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Reserved static IP address: 192.168.72.118
	I0717 22:51:12.371747   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | skip adding static IP to network mk-default-k8s-diff-port-504828 - found existing host DHCP lease matching {name: "default-k8s-diff-port-504828", mac: "52:54:00:28:6f:f7", ip: "192.168.72.118"}
	I0717 22:51:12.371759   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for SSH to be available...
	I0717 22:51:12.371774   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | Getting to WaitForSSH function...
	I0717 22:51:12.374416   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.374804   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:12.374839   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.374958   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | Using SSH client type: external
	I0717 22:51:12.375000   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | Using SSH private key: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa (-rw-------)
	I0717 22:51:12.375056   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.118 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 22:51:12.375078   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | About to run SSH command:
	I0717 22:51:12.375103   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | exit 0
	I0717 22:51:12.461844   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | SSH cmd err, output: <nil>: 
	I0717 22:51:12.462190   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetConfigRaw
	I0717 22:51:12.462878   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetIP
	I0717 22:51:12.465698   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.466129   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:12.466171   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.466432   54649 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/config.json ...
	I0717 22:51:12.466686   54649 machine.go:88] provisioning docker machine ...
	I0717 22:51:12.466713   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:51:12.466932   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetMachineName
	I0717 22:51:12.467149   54649 buildroot.go:166] provisioning hostname "default-k8s-diff-port-504828"
	I0717 22:51:12.467174   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetMachineName
	I0717 22:51:12.467336   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:51:12.469892   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.470309   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:12.470347   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.470539   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:51:12.470711   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:12.470906   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:12.471075   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:51:12.471251   54649 main.go:141] libmachine: Using SSH client type: native
	I0717 22:51:12.471709   54649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.118 22 <nil> <nil>}
	I0717 22:51:12.471728   54649 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-504828 && echo "default-k8s-diff-port-504828" | sudo tee /etc/hostname
	I0717 22:51:10.226119   54573 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.7-0: (7.403300342s)
	I0717 22:51:10.226147   54573 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0 from cache
	I0717 22:51:10.226176   54573 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0717 22:51:10.226231   54573 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0717 22:51:12.580664   54573 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.27.3: (2.354394197s)
	I0717 22:51:12.580698   54573 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3 from cache
	I0717 22:51:12.580729   54573 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.27.3
	I0717 22:51:12.580786   54573 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.27.3
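	(The interleaved cache_images/crio lines above reduce to one pattern per image: inspect whether the image already exists in the container runtime, remove the stale tag with crictl when the hash does not match, then `podman load` the cached tarball from /var/lib/minikube/images. A rough Go sketch of the check-then-load step is below; the image name and tarball path are copied from the log, the helper name ensureImage is hypothetical, and the commands only make sense on the node where the cached archive exists.)

		package main

		import (
			"fmt"
			"os/exec"
		)

		// ensureImage is a hypothetical helper mirroring the log's pattern:
		// if `podman image inspect` fails (image absent), load the cached tarball.
		func ensureImage(image, tarball string) error {
			if err := exec.Command("sudo", "podman", "image", "inspect",
				"--format", "{{.Id}}", image).Run(); err == nil {
				return nil // image already present in the container runtime
			}
			out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
			if err != nil {
				return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
			}
			return nil
		}

		func main() {
			if err := ensureImage("registry.k8s.io/kube-proxy:v1.27.3",
				"/var/lib/minikube/images/kube-proxy_v1.27.3"); err != nil {
				fmt.Println(err)
			}
		}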
	I0717 22:51:08.320182   54248 pod_ready.go:92] pod "etcd-embed-certs-571296" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:08.320212   54248 pod_ready.go:81] duration metric: took 4.017631268s waiting for pod "etcd-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:08.320225   54248 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:08.327865   54248 pod_ready.go:92] pod "kube-apiserver-embed-certs-571296" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:08.327901   54248 pod_ready.go:81] duration metric: took 7.613771ms waiting for pod "kube-apiserver-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:08.327916   54248 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:10.343489   54248 pod_ready.go:102] pod "kube-controller-manager-embed-certs-571296" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:11.344309   54248 pod_ready.go:92] pod "kube-controller-manager-embed-certs-571296" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:11.344328   54248 pod_ready.go:81] duration metric: took 3.016404448s waiting for pod "kube-controller-manager-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:11.344338   54248 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kpt5d" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:11.353150   54248 pod_ready.go:92] pod "kube-proxy-kpt5d" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:11.353174   54248 pod_ready.go:81] duration metric: took 8.829647ms waiting for pod "kube-proxy-kpt5d" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:11.353183   54248 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:11.360223   54248 pod_ready.go:92] pod "kube-scheduler-embed-certs-571296" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:11.360242   54248 pod_ready.go:81] duration metric: took 7.0537ms waiting for pod "kube-scheduler-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:11.360251   54248 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace to be "Ready" ...
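	(The pod_ready.go messages above come from a loop that polls each system-critical pod until its Ready condition is True or the 4m0s budget runs out. A minimal client-go sketch of that idea follows; the namespace and pod name are copied from the log, while the kubeconfig path and the 2-second poll interval are assumptions for illustration, not minikube's implementation.)

		package main

		import (
			"context"
			"fmt"
			"time"

			corev1 "k8s.io/api/core/v1"
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/tools/clientcmd"
		)

		func main() {
			// Kubeconfig path is an assumption; point it at the profile under test.
			cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
			if err != nil {
				panic(err)
			}
			cs, err := kubernetes.NewForConfig(cfg)
			if err != nil {
				panic(err)
			}
			deadline := time.Now().Add(4 * time.Minute)
			for time.Now().Before(deadline) {
				pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
					"coredns-5d78c9869d-pwd8q", metav1.GetOptions{})
				if err == nil {
					for _, c := range pod.Status.Conditions {
						if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
							fmt.Println("pod is Ready")
							return
						}
					}
				}
				time.Sleep(2 * time.Second) // poll interval is an assumption
			}
			fmt.Println("timed out waiting for pod to become Ready")
		}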
	I0717 22:51:13.630627   53870 start.go:369] acquired machines lock for "old-k8s-version-332820" in 58.214644858s
	I0717 22:51:13.630698   53870 start.go:96] Skipping create...Using existing machine configuration
	I0717 22:51:13.630705   53870 fix.go:54] fixHost starting: 
	I0717 22:51:13.631117   53870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:51:13.631153   53870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:51:13.651676   53870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38349
	I0717 22:51:13.652152   53870 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:51:13.652820   53870 main.go:141] libmachine: Using API Version  1
	I0717 22:51:13.652841   53870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:51:13.653180   53870 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:51:13.653679   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:51:13.653832   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetState
	I0717 22:51:13.656911   53870 fix.go:102] recreateIfNeeded on old-k8s-version-332820: state=Stopped err=<nil>
	I0717 22:51:13.656944   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	W0717 22:51:13.657151   53870 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 22:51:13.659194   53870 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-332820" ...
	I0717 22:51:12.607198   54649 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-504828
	
	I0717 22:51:12.607256   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:51:12.610564   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.611073   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:12.611139   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.611470   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:51:12.611707   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:12.611918   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:12.612080   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:51:12.612267   54649 main.go:141] libmachine: Using SSH client type: native
	I0717 22:51:12.612863   54649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.118 22 <nil> <nil>}
	I0717 22:51:12.612897   54649 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-504828' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-504828/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-504828' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 22:51:12.749133   54649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:51:12.749159   54649 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16899-15759/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-15759/.minikube}
	I0717 22:51:12.749187   54649 buildroot.go:174] setting up certificates
	I0717 22:51:12.749198   54649 provision.go:83] configureAuth start
	I0717 22:51:12.749211   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetMachineName
	I0717 22:51:12.749475   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetIP
	I0717 22:51:12.752199   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.752608   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:12.752637   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.752753   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:51:12.754758   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.755095   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:12.755142   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.755255   54649 provision.go:138] copyHostCerts
	I0717 22:51:12.755313   54649 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem, removing ...
	I0717 22:51:12.755328   54649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem
	I0717 22:51:12.755393   54649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem (1078 bytes)
	I0717 22:51:12.755503   54649 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem, removing ...
	I0717 22:51:12.755516   54649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem
	I0717 22:51:12.755547   54649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem (1123 bytes)
	I0717 22:51:12.755615   54649 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem, removing ...
	I0717 22:51:12.755626   54649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem
	I0717 22:51:12.755649   54649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem (1675 bytes)
	I0717 22:51:12.755708   54649 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-504828 san=[192.168.72.118 192.168.72.118 localhost 127.0.0.1 minikube default-k8s-diff-port-504828]
	I0717 22:51:12.865920   54649 provision.go:172] copyRemoteCerts
	I0717 22:51:12.865978   54649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 22:51:12.865998   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:51:12.868784   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.869162   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:12.869196   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.869354   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:51:12.869551   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:12.869731   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:51:12.869864   54649 sshutil.go:53] new ssh client: &{IP:192.168.72.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa Username:docker}
	I0717 22:51:12.963734   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 22:51:12.988925   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0717 22:51:13.014007   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 22:51:13.037974   54649 provision.go:86] duration metric: configureAuth took 288.764872ms
	I0717 22:51:13.038002   54649 buildroot.go:189] setting minikube options for container-runtime
	I0717 22:51:13.038226   54649 config.go:182] Loaded profile config "default-k8s-diff-port-504828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:51:13.038298   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:51:13.041038   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.041510   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:13.041560   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.041722   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:51:13.041928   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:13.042115   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:13.042265   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:51:13.042462   54649 main.go:141] libmachine: Using SSH client type: native
	I0717 22:51:13.042862   54649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.118 22 <nil> <nil>}
	I0717 22:51:13.042883   54649 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 22:51:13.359789   54649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 22:51:13.359856   54649 machine.go:91] provisioned docker machine in 893.152202ms
	I0717 22:51:13.359873   54649 start.go:300] post-start starting for "default-k8s-diff-port-504828" (driver="kvm2")
	I0717 22:51:13.359885   54649 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 22:51:13.359909   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:51:13.360286   54649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 22:51:13.360322   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:51:13.363265   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.363637   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:13.363668   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.363953   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:51:13.364165   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:13.364336   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:51:13.364484   54649 sshutil.go:53] new ssh client: &{IP:192.168.72.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa Username:docker}
	I0717 22:51:13.456030   54649 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 22:51:13.460504   54649 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 22:51:13.460539   54649 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/addons for local assets ...
	I0717 22:51:13.460610   54649 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/files for local assets ...
	I0717 22:51:13.460711   54649 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> 229902.pem in /etc/ssl/certs
	I0717 22:51:13.460824   54649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 22:51:13.469442   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:51:13.497122   54649 start.go:303] post-start completed in 137.230872ms
	I0717 22:51:13.497150   54649 fix.go:56] fixHost completed within 19.269364226s
	I0717 22:51:13.497196   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:51:13.500248   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.500673   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:13.500721   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.500872   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:51:13.501093   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:13.501256   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:13.501434   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:51:13.501602   54649 main.go:141] libmachine: Using SSH client type: native
	I0717 22:51:13.502063   54649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.118 22 <nil> <nil>}
	I0717 22:51:13.502080   54649 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 22:51:13.630454   54649 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689634273.570672552
	
	I0717 22:51:13.630476   54649 fix.go:206] guest clock: 1689634273.570672552
	I0717 22:51:13.630486   54649 fix.go:219] Guest: 2023-07-17 22:51:13.570672552 +0000 UTC Remote: 2023-07-17 22:51:13.49715425 +0000 UTC m=+216.001835933 (delta=73.518302ms)
	I0717 22:51:13.630534   54649 fix.go:190] guest clock delta is within tolerance: 73.518302ms
	I0717 22:51:13.630541   54649 start.go:83] releasing machines lock for "default-k8s-diff-port-504828", held for 19.402800296s
	I0717 22:51:13.630571   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:51:13.630804   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetIP
	I0717 22:51:13.633831   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.634285   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:13.634329   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.634496   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:51:13.635108   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:51:13.635324   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:51:13.635440   54649 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 22:51:13.635513   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:51:13.635563   54649 ssh_runner.go:195] Run: cat /version.json
	I0717 22:51:13.635590   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:51:13.638872   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.639085   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.639277   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:13.639313   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.639513   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:51:13.639711   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:13.639730   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:13.639769   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.639930   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:51:13.639966   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:51:13.640133   54649 sshutil.go:53] new ssh client: &{IP:192.168.72.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa Username:docker}
	I0717 22:51:13.640149   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:13.640293   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:51:13.640432   54649 sshutil.go:53] new ssh client: &{IP:192.168.72.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa Username:docker}
	I0717 22:51:13.732117   54649 ssh_runner.go:195] Run: systemctl --version
	I0717 22:51:13.762073   54649 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 22:51:13.920611   54649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 22:51:13.927492   54649 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 22:51:13.927552   54649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:51:13.943359   54649 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
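	[editor's note] The find/mv step above disables conflicting bridge and podman CNI configs by renaming them rather than deleting them, so they can be restored later by stripping the suffix. With the path reported in this log, the effect is roughly:

	/etc/cni/net.d/87-podman-bridge.conflist  ->  /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled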
	I0717 22:51:13.943384   54649 start.go:466] detecting cgroup driver to use...
	I0717 22:51:13.943456   54649 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 22:51:13.959123   54649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 22:51:13.974812   54649 docker.go:196] disabling cri-docker service (if available) ...
	I0717 22:51:13.974875   54649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 22:51:13.991292   54649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 22:51:14.006999   54649 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 22:51:14.116763   54649 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 22:51:14.286675   54649 docker.go:212] disabling docker service ...
	I0717 22:51:14.286747   54649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 22:51:14.304879   54649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 22:51:14.319280   54649 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 22:51:14.436994   54649 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 22:51:14.551392   54649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 22:51:14.564944   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 22:51:14.588553   54649 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 22:51:14.588618   54649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:51:14.602482   54649 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 22:51:14.602561   54649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:51:14.613901   54649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:51:14.624520   54649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
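	[editor's note] The sed edits above rewrite the 02-crio.conf drop-in in place: they set the pause image, switch the cgroup manager to cgroupfs, and pin conmon's cgroup to "pod". A sketch of the relevant keys after the edits, assuming a stock CRI-O layout (the TOML section headers are an assumption, not shown in this log):

	# /etc/crio/crio.conf.d/02-crio.conf (relevant keys after the edits above)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"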
	I0717 22:51:14.634941   54649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 22:51:14.649124   54649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 22:51:14.659103   54649 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 22:51:14.659194   54649 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 22:51:14.673064   54649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 22:51:14.684547   54649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 22:51:14.796698   54649 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 22:51:15.013266   54649 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 22:51:15.013352   54649 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 22:51:15.019638   54649 start.go:534] Will wait 60s for crictl version
	I0717 22:51:15.019707   54649 ssh_runner.go:195] Run: which crictl
	I0717 22:51:15.023691   54649 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 22:51:15.079550   54649 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 22:51:15.079642   54649 ssh_runner.go:195] Run: crio --version
	I0717 22:51:15.149137   54649 ssh_runner.go:195] Run: crio --version
	I0717 22:51:15.210171   54649 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 22:51:15.211641   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetIP
	I0717 22:51:15.214746   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:15.215160   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:15.215195   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:15.215444   54649 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 22:51:15.220209   54649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
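	[editor's note] The bash pipeline above rewrites /etc/hosts idempotently: it filters out any existing host.minikube.internal entry, appends the current mapping, and copies the temp file back in a single sudo step, so repeated runs leave exactly one entry. After it runs, /etc/hosts contains a line like (IP taken from this log):

	192.168.72.1	host.minikube.internal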
	I0717 22:51:15.233265   54649 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 22:51:15.233336   54649 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:51:15.278849   54649 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 22:51:15.278928   54649 ssh_runner.go:195] Run: which lz4
	I0717 22:51:15.284618   54649 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 22:51:15.289979   54649 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 22:51:15.290021   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0717 22:51:17.240790   54649 crio.go:444] Took 1.956220 seconds to copy over tarball
	I0717 22:51:17.240850   54649 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 22:51:14.577167   54573 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.27.3: (1.996354374s)
	I0717 22:51:14.577200   54573 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3 from cache
	I0717 22:51:14.577239   54573 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 22:51:14.577288   54573 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0717 22:51:15.749388   54573 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.172071962s)
	I0717 22:51:15.749419   54573 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 22:51:15.749442   54573 cache_images.go:123] Successfully loaded all cached images
	I0717 22:51:15.749448   54573 cache_images.go:92] LoadImages completed in 19.962118423s
	I0717 22:51:15.749548   54573 ssh_runner.go:195] Run: crio config
	I0717 22:51:15.830341   54573 cni.go:84] Creating CNI manager for ""
	I0717 22:51:15.830380   54573 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:51:15.830394   54573 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 22:51:15.830416   54573 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.6 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-935524 NodeName:no-preload-935524 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 22:51:15.830609   54573 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-935524"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 22:51:15.830710   54573 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-935524 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:no-preload-935524 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 22:51:15.830777   54573 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 22:51:15.844785   54573 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 22:51:15.844854   54573 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 22:51:15.859135   54573 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0717 22:51:15.884350   54573 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 22:51:15.904410   54573 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0717 22:51:15.930959   54573 ssh_runner.go:195] Run: grep 192.168.39.6	control-plane.minikube.internal$ /etc/hosts
	I0717 22:51:15.937680   54573 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:51:15.960124   54573 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524 for IP: 192.168.39.6
	I0717 22:51:15.960169   54573 certs.go:190] acquiring lock for shared ca certs: {Name:mk358cdd8ffcf2f8ada4337c2bb687d932ef5afe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:51:15.960352   54573 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key
	I0717 22:51:15.960416   54573 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key
	I0717 22:51:15.960539   54573 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/client.key
	I0717 22:51:15.960635   54573 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/apiserver.key.cc3bd7a5
	I0717 22:51:15.960694   54573 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/proxy-client.key
	I0717 22:51:15.960842   54573 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem (1338 bytes)
	W0717 22:51:15.960882   54573 certs.go:433] ignoring /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990_empty.pem, impossibly tiny 0 bytes
	I0717 22:51:15.960899   54573 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 22:51:15.960936   54573 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem (1078 bytes)
	I0717 22:51:15.960973   54573 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem (1123 bytes)
	I0717 22:51:15.961001   54573 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem (1675 bytes)
	I0717 22:51:15.961063   54573 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:51:15.961864   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 22:51:16.000246   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 22:51:16.036739   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 22:51:16.073916   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 22:51:16.110871   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 22:51:16.147671   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 22:51:16.183503   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 22:51:16.216441   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 22:51:16.251053   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 22:51:16.291022   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem --> /usr/share/ca-certificates/22990.pem (1338 bytes)
	I0717 22:51:16.327764   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /usr/share/ca-certificates/229902.pem (1708 bytes)
	I0717 22:51:16.360870   54573 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 22:51:16.399760   54573 ssh_runner.go:195] Run: openssl version
	I0717 22:51:16.407720   54573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 22:51:16.423038   54573 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:51:16.430870   54573 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:51:16.430933   54573 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:51:16.441206   54573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 22:51:16.455708   54573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22990.pem && ln -fs /usr/share/ca-certificates/22990.pem /etc/ssl/certs/22990.pem"
	I0717 22:51:16.470036   54573 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22990.pem
	I0717 22:51:16.477133   54573 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 21:49 /usr/share/ca-certificates/22990.pem
	I0717 22:51:16.477206   54573 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22990.pem
	I0717 22:51:16.485309   54573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22990.pem /etc/ssl/certs/51391683.0"
	I0717 22:51:16.503973   54573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/229902.pem && ln -fs /usr/share/ca-certificates/229902.pem /etc/ssl/certs/229902.pem"
	I0717 22:51:16.524430   54573 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/229902.pem
	I0717 22:51:16.533991   54573 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 21:49 /usr/share/ca-certificates/229902.pem
	I0717 22:51:16.534052   54573 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/229902.pem
	I0717 22:51:16.544688   54573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/229902.pem /etc/ssl/certs/3ec20f2e.0"
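	[editor's note] The symlink names created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow the OpenSSL c_rehash convention: the link name is the certificate's subject hash plus a numeric suffix, which is why each `ln -fs` is preceded by an `openssl x509 -hash -noout` run. A manual way to verify the mapping, using the path from this log (the expected output is inferred from the b5213941.0 link created above):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # expected to print b5213941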
	I0717 22:51:16.563847   54573 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 22:51:16.572122   54573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 22:51:16.583217   54573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 22:51:16.594130   54573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 22:51:16.606268   54573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 22:51:16.618166   54573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 22:51:16.628424   54573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
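	[editor's note] Each `-checkend 86400` run above asks OpenSSL whether the certificate expires within the next 86400 seconds (24 hours): exit status 0 means it stays valid beyond that window, non-zero means it expires (or already has), which is what lets minikube reuse the existing certs here instead of regenerating them. For example, with a path from this log:

	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"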
	I0717 22:51:16.636407   54573 kubeadm.go:404] StartCluster: {Name:no-preload-935524 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-935524 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/mini
kube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:51:16.636531   54573 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 22:51:16.636616   54573 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:51:16.677023   54573 cri.go:89] found id: ""
	I0717 22:51:16.677096   54573 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 22:51:16.691214   54573 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 22:51:16.691243   54573 kubeadm.go:636] restartCluster start
	I0717 22:51:16.691309   54573 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 22:51:16.705358   54573 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:16.707061   54573 kubeconfig.go:92] found "no-preload-935524" server: "https://192.168.39.6:8443"
	I0717 22:51:16.710828   54573 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 22:51:16.722187   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:16.722262   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:16.739474   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:17.240340   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:17.240432   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:17.255528   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
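	[editor's note] The repeated "Checking apiserver status ... stopped" blocks above are a poll loop: minikube looks for a running kube-apiserver process via pgrep roughly every half second and keeps going until one appears or a deadline passes. A minimal sketch of that loop under those assumptions (the interval and timeout here are illustrative; the real helper lives in minikube's api_server.go):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer polls for a running kube-apiserver process until the
	// timeout expires. It mirrors the pgrep check shown in the log above.
	func waitForAPIServer(interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// Same check as the log: pgrep for the apiserver command line.
			cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
			if err := cmd.Run(); err == nil {
				return nil // apiserver process found
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
	}

	func main() {
		if err := waitForAPIServer(500*time.Millisecond, time.Minute); err != nil {
			fmt.Println(err)
		}
	}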
	I0717 22:51:13.660641   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .Start
	I0717 22:51:13.660899   53870 main.go:141] libmachine: (old-k8s-version-332820) Ensuring networks are active...
	I0717 22:51:13.661724   53870 main.go:141] libmachine: (old-k8s-version-332820) Ensuring network default is active
	I0717 22:51:13.662114   53870 main.go:141] libmachine: (old-k8s-version-332820) Ensuring network mk-old-k8s-version-332820 is active
	I0717 22:51:13.662588   53870 main.go:141] libmachine: (old-k8s-version-332820) Getting domain xml...
	I0717 22:51:13.663907   53870 main.go:141] libmachine: (old-k8s-version-332820) Creating domain...
	I0717 22:51:14.067159   53870 main.go:141] libmachine: (old-k8s-version-332820) Waiting to get IP...
	I0717 22:51:14.067897   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:14.068328   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:14.068398   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:14.068321   55454 retry.go:31] will retry after 239.1687ms: waiting for machine to come up
	I0717 22:51:14.309022   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:14.309748   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:14.309782   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:14.309696   55454 retry.go:31] will retry after 256.356399ms: waiting for machine to come up
	I0717 22:51:14.568103   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:14.568537   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:14.568572   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:14.568490   55454 retry.go:31] will retry after 386.257739ms: waiting for machine to come up
	I0717 22:51:14.955922   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:14.956518   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:14.956548   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:14.956458   55454 retry.go:31] will retry after 410.490408ms: waiting for machine to come up
	I0717 22:51:15.368904   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:15.369672   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:15.369780   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:15.369722   55454 retry.go:31] will retry after 536.865068ms: waiting for machine to come up
	I0717 22:51:15.908301   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:15.908814   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:15.908851   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:15.908774   55454 retry.go:31] will retry after 863.22272ms: waiting for machine to come up
	I0717 22:51:16.773413   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:16.773936   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:16.773971   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:16.773877   55454 retry.go:31] will retry after 858.793193ms: waiting for machine to come up
	I0717 22:51:17.634087   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:17.634588   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:17.634613   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:17.634532   55454 retry.go:31] will retry after 1.416659037s: waiting for machine to come up
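	[editor's note] The "will retry after ..." lines above show libmachine waiting for the restarted VM to obtain a DHCP lease, retrying with progressively longer, jittered delays. A simplified sketch of that pattern, assuming the IP lookup is abstracted behind a callback (the backoff, jitter, and cap values are illustrative, not minikube's exact ones):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP retries lookup with a growing, jittered delay until it returns
	// an address or attempts are exhausted, mirroring the retry.go lines above.
	func waitForIP(lookup func() (string, error), attempts int) (string, error) {
		delay := 200 * time.Millisecond
		for i := 0; i < attempts; i++ {
			if ip, err := lookup(); err == nil {
				return ip, nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			if delay < 2*time.Second {
				delay *= 2 // back off, capped at ~2s for this sketch
			}
		}
		return "", errors.New("machine never reported an IP address")
	}

	func main() {
		n := 0
		ip, err := waitForIP(func() (string, error) {
			n++
			if n < 4 {
				return "", errors.New("no DHCP lease yet")
			}
			return "192.168.61.10", nil // placeholder address for the sketch
		}, 10)
		fmt.Println(ip, err)
	}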
	I0717 22:51:13.375358   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:15.393985   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:17.887365   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
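	[editor's note] The pod_ready lines above come from minikube polling the metrics-server pod's Ready condition until it flips to True (it stays "False" throughout this log). An equivalent manual check with kubectl; the label selector is an assumption, since only the pod name metrics-server-74d5c6b9c-jl7jl appears in this log:

	kubectl -n kube-system get pod -l k8s-app=metrics-server \
	  -o jsonpath='{.items[0].status.conditions[?(@.type=="Ready")].status}'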
	I0717 22:51:20.250749   54649 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.009864781s)
	I0717 22:51:20.250783   54649 crio.go:451] Took 3.009971 seconds to extract the tarball
	I0717 22:51:20.250793   54649 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 22:51:20.291666   54649 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:51:20.341098   54649 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 22:51:20.341126   54649 cache_images.go:84] Images are preloaded, skipping loading
	I0717 22:51:20.341196   54649 ssh_runner.go:195] Run: crio config
	I0717 22:51:20.415138   54649 cni.go:84] Creating CNI manager for ""
	I0717 22:51:20.415161   54649 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:51:20.415171   54649 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 22:51:20.415185   54649 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.118 APIServerPort:8444 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-504828 NodeName:default-k8s-diff-port-504828 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.118"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.118 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 22:51:20.415352   54649 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.118
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-504828"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.118
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.118"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 22:51:20.415432   54649 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-504828 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.118
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-504828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0717 22:51:20.415488   54649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 22:51:20.427702   54649 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 22:51:20.427758   54649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 22:51:20.436950   54649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0717 22:51:20.454346   54649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 22:51:20.470679   54649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0717 22:51:20.491725   54649 ssh_runner.go:195] Run: grep 192.168.72.118	control-plane.minikube.internal$ /etc/hosts
	I0717 22:51:20.495952   54649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.118	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:51:20.511714   54649 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828 for IP: 192.168.72.118
	I0717 22:51:20.511768   54649 certs.go:190] acquiring lock for shared ca certs: {Name:mk358cdd8ffcf2f8ada4337c2bb687d932ef5afe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:51:20.511949   54649 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key
	I0717 22:51:20.511997   54649 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key
	I0717 22:51:20.512100   54649 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/client.key
	I0717 22:51:20.512210   54649 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/apiserver.key.f316a5ec
	I0717 22:51:20.512293   54649 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/proxy-client.key
	I0717 22:51:20.512432   54649 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem (1338 bytes)
	W0717 22:51:20.512474   54649 certs.go:433] ignoring /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990_empty.pem, impossibly tiny 0 bytes
	I0717 22:51:20.512490   54649 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 22:51:20.512526   54649 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem (1078 bytes)
	I0717 22:51:20.512563   54649 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem (1123 bytes)
	I0717 22:51:20.512597   54649 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem (1675 bytes)
	I0717 22:51:20.512654   54649 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:51:20.513217   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 22:51:20.543975   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 22:51:20.573149   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 22:51:20.603536   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 22:51:20.632387   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 22:51:20.658524   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 22:51:20.685636   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 22:51:20.715849   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 22:51:20.746544   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /usr/share/ca-certificates/229902.pem (1708 bytes)
	I0717 22:51:20.773588   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 22:51:20.798921   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem --> /usr/share/ca-certificates/22990.pem (1338 bytes)
	I0717 22:51:20.826004   54649 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 22:51:20.843941   54649 ssh_runner.go:195] Run: openssl version
	I0717 22:51:20.849904   54649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/229902.pem && ln -fs /usr/share/ca-certificates/229902.pem /etc/ssl/certs/229902.pem"
	I0717 22:51:20.860510   54649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/229902.pem
	I0717 22:51:20.865435   54649 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 21:49 /usr/share/ca-certificates/229902.pem
	I0717 22:51:20.865499   54649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/229902.pem
	I0717 22:51:20.872493   54649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/229902.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 22:51:20.883044   54649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 22:51:20.893448   54649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:51:20.898872   54649 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:51:20.898937   54649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:51:20.905231   54649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 22:51:20.915267   54649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22990.pem && ln -fs /usr/share/ca-certificates/22990.pem /etc/ssl/certs/22990.pem"
	I0717 22:51:20.925267   54649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22990.pem
	I0717 22:51:20.929988   54649 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 21:49 /usr/share/ca-certificates/22990.pem
	I0717 22:51:20.930055   54649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22990.pem
	I0717 22:51:20.935935   54649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22990.pem /etc/ssl/certs/51391683.0"
	I0717 22:51:20.945567   54649 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 22:51:20.950083   54649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 22:51:20.956164   54649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 22:51:20.962921   54649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 22:51:20.969329   54649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 22:51:20.975672   54649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 22:51:20.981532   54649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 22:51:20.987431   54649 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-504828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port
-504828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.118 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountSt
ring:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:51:20.987551   54649 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 22:51:20.987640   54649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:51:21.020184   54649 cri.go:89] found id: ""
	I0717 22:51:21.020272   54649 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 22:51:21.030407   54649 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 22:51:21.030426   54649 kubeadm.go:636] restartCluster start
	I0717 22:51:21.030484   54649 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 22:51:21.039171   54649 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:21.040133   54649 kubeconfig.go:92] found "default-k8s-diff-port-504828" server: "https://192.168.72.118:8444"
	I0717 22:51:21.043010   54649 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 22:51:21.052032   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:21.052083   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:21.063718   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:21.564403   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:21.564474   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:21.576250   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:22.063846   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:22.063915   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:22.077908   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:17.739595   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:17.739675   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:17.754882   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:18.240006   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:18.240109   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:18.253391   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:18.739658   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:18.739750   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:18.751666   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:19.240285   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:19.240385   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:19.254816   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:19.740338   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:19.740430   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:19.757899   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:20.240481   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:20.240561   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:20.255605   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:20.739950   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:20.740064   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:20.754552   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:21.240009   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:21.240088   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:21.252127   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:21.739671   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:21.739761   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:21.751590   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:22.239795   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:22.239895   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:22.255489   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:19.053039   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:19.053552   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:19.053577   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:19.053545   55454 retry.go:31] will retry after 1.844468395s: waiting for machine to come up
	I0717 22:51:20.899373   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:20.899955   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:20.899985   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:20.899907   55454 retry.go:31] will retry after 1.689590414s: waiting for machine to come up
	I0717 22:51:22.590651   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:22.591178   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:22.591210   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:22.591133   55454 retry.go:31] will retry after 2.006187847s: waiting for machine to come up
	I0717 22:51:20.375100   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:22.375448   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:22.564646   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:22.564758   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:22.578416   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:23.063819   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:23.063917   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:23.076239   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:23.563771   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:23.563906   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:23.577184   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:24.064855   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:24.064943   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:24.080926   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:24.563906   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:24.564002   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:24.580421   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:25.063993   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:25.064078   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:25.076570   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:25.563894   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:25.563978   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:25.575475   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:26.063959   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:26.064042   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:26.075498   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:26.564007   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:26.564068   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:26.576760   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:27.064334   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:27.064437   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:27.076567   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:22.739773   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:22.739859   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:22.752462   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:23.240402   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:23.240481   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:23.255896   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:23.740550   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:23.740740   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:23.756364   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:24.239721   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:24.239803   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:24.251755   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:24.740355   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:24.740455   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:24.751880   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:25.240545   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:25.240637   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:25.252165   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:25.739649   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:25.739729   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:25.751302   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:26.239861   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:26.239951   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:26.251854   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:26.722721   54573 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 22:51:26.722761   54573 kubeadm.go:1128] stopping kube-system containers ...
	I0717 22:51:26.722774   54573 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 22:51:26.722824   54573 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:51:26.754496   54573 cri.go:89] found id: ""
	I0717 22:51:26.754575   54573 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 22:51:26.769858   54573 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 22:51:26.778403   54573 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:51:26.778456   54573 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 22:51:26.788782   54573 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 22:51:26.788809   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:26.926114   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:24.598549   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:24.599047   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:24.599078   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:24.598993   55454 retry.go:31] will retry after 2.77055632s: waiting for machine to come up
	I0717 22:51:27.371775   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:27.372248   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:27.372282   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:27.372196   55454 retry.go:31] will retry after 3.942088727s: waiting for machine to come up
	I0717 22:51:24.876056   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:26.876873   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:27.564363   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:27.564459   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:27.578222   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:28.063778   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:28.063883   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:28.075427   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:28.564630   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:28.564717   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:28.576903   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:29.064502   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:29.064605   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:29.075995   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:29.564295   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:29.564378   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:29.576762   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:30.063786   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:30.063870   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:30.079670   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:30.564137   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:30.564246   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:30.579055   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:31.052972   54649 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 22:51:31.053010   54649 kubeadm.go:1128] stopping kube-system containers ...
	I0717 22:51:31.053022   54649 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 22:51:31.053071   54649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:51:31.087580   54649 cri.go:89] found id: ""
	I0717 22:51:31.087681   54649 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 22:51:31.103788   54649 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 22:51:31.113570   54649 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:51:31.113630   54649 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 22:51:31.122993   54649 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 22:51:31.123016   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:31.254859   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:32.122277   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:32.360183   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:32.499924   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:28.181412   54573 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.255240525s)
	I0717 22:51:28.181446   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:28.398026   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:28.491028   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:28.586346   54573 api_server.go:52] waiting for apiserver process to appear ...
	I0717 22:51:28.586450   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:29.099979   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:29.599755   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:30.100095   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:30.600338   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:31.100205   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:31.129978   54573 api_server.go:72] duration metric: took 2.543631809s to wait for apiserver process to appear ...
	I0717 22:51:31.130004   54573 api_server.go:88] waiting for apiserver healthz status ...
	I0717 22:51:31.130020   54573 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0717 22:51:31.316328   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.316892   53870 main.go:141] libmachine: (old-k8s-version-332820) Found IP for machine: 192.168.50.149
	I0717 22:51:31.316924   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has current primary IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.316936   53870 main.go:141] libmachine: (old-k8s-version-332820) Reserving static IP address...
	I0717 22:51:31.317425   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "old-k8s-version-332820", mac: "52:54:00:46:ca:1a", ip: "192.168.50.149"} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:31.317463   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | skip adding static IP to network mk-old-k8s-version-332820 - found existing host DHCP lease matching {name: "old-k8s-version-332820", mac: "52:54:00:46:ca:1a", ip: "192.168.50.149"}
	I0717 22:51:31.317486   53870 main.go:141] libmachine: (old-k8s-version-332820) Reserved static IP address: 192.168.50.149
	I0717 22:51:31.317503   53870 main.go:141] libmachine: (old-k8s-version-332820) Waiting for SSH to be available...
	I0717 22:51:31.317531   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | Getting to WaitForSSH function...
	I0717 22:51:31.320209   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.320558   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:31.320593   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.320779   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | Using SSH client type: external
	I0717 22:51:31.320810   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | Using SSH private key: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/old-k8s-version-332820/id_rsa (-rw-------)
	I0717 22:51:31.320862   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.149 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16899-15759/.minikube/machines/old-k8s-version-332820/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 22:51:31.320881   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | About to run SSH command:
	I0717 22:51:31.320895   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | exit 0
	I0717 22:51:31.426263   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | SSH cmd err, output: <nil>: 
	I0717 22:51:31.426659   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetConfigRaw
	I0717 22:51:31.427329   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetIP
	I0717 22:51:31.430330   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.430697   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:31.430739   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.431053   53870 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/config.json ...
	I0717 22:51:31.431288   53870 machine.go:88] provisioning docker machine ...
	I0717 22:51:31.431312   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:51:31.431531   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetMachineName
	I0717 22:51:31.431711   53870 buildroot.go:166] provisioning hostname "old-k8s-version-332820"
	I0717 22:51:31.431736   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetMachineName
	I0717 22:51:31.431959   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:51:31.434616   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.435073   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:31.435105   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.435246   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:51:31.435429   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:31.435578   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:31.435720   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:51:31.435889   53870 main.go:141] libmachine: Using SSH client type: native
	I0717 22:51:31.436476   53870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.149 22 <nil> <nil>}
	I0717 22:51:31.436499   53870 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-332820 && echo "old-k8s-version-332820" | sudo tee /etc/hostname
	I0717 22:51:31.589302   53870 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-332820
	
	I0717 22:51:31.589343   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:51:31.592724   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.593180   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:31.593236   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.593559   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:51:31.593754   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:31.593922   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:31.594077   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:51:31.594266   53870 main.go:141] libmachine: Using SSH client type: native
	I0717 22:51:31.594671   53870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.149 22 <nil> <nil>}
	I0717 22:51:31.594696   53870 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-332820' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-332820/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-332820' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 22:51:31.746218   53870 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:51:31.746250   53870 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16899-15759/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-15759/.minikube}
	I0717 22:51:31.746274   53870 buildroot.go:174] setting up certificates
	I0717 22:51:31.746298   53870 provision.go:83] configureAuth start
	I0717 22:51:31.746316   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetMachineName
	I0717 22:51:31.746626   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetIP
	I0717 22:51:31.750130   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.750678   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:31.750724   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.750781   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:51:31.753170   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.753495   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:31.753552   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.753654   53870 provision.go:138] copyHostCerts
	I0717 22:51:31.753715   53870 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem, removing ...
	I0717 22:51:31.753728   53870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem
	I0717 22:51:31.753804   53870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem (1078 bytes)
	I0717 22:51:31.753944   53870 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem, removing ...
	I0717 22:51:31.753957   53870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem
	I0717 22:51:31.753989   53870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem (1123 bytes)
	I0717 22:51:31.754072   53870 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem, removing ...
	I0717 22:51:31.754085   53870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem
	I0717 22:51:31.754113   53870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem (1675 bytes)
	I0717 22:51:31.754184   53870 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-332820 san=[192.168.50.149 192.168.50.149 localhost 127.0.0.1 minikube old-k8s-version-332820]
	I0717 22:51:31.847147   53870 provision.go:172] copyRemoteCerts
	I0717 22:51:31.847203   53870 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 22:51:31.847225   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:51:31.850322   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.850753   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:31.850810   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.851095   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:51:31.851414   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:31.851605   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:51:31.851784   53870 sshutil.go:53] new ssh client: &{IP:192.168.50.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/old-k8s-version-332820/id_rsa Username:docker}
	I0717 22:51:31.951319   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 22:51:31.980515   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 22:51:32.010536   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 22:51:32.037399   53870 provision.go:86] duration metric: configureAuth took 291.082125ms
	I0717 22:51:32.037434   53870 buildroot.go:189] setting minikube options for container-runtime
	I0717 22:51:32.037660   53870 config.go:182] Loaded profile config "old-k8s-version-332820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0717 22:51:32.037735   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:51:32.040863   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.041427   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:32.041534   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.041625   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:51:32.041848   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:32.042053   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:32.042225   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:51:32.042394   53870 main.go:141] libmachine: Using SSH client type: native
	I0717 22:51:32.042812   53870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.149 22 <nil> <nil>}
	I0717 22:51:32.042834   53870 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 22:51:32.425577   53870 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 22:51:32.425603   53870 machine.go:91] provisioned docker machine in 994.299178ms
	I0717 22:51:32.425615   53870 start.go:300] post-start starting for "old-k8s-version-332820" (driver="kvm2")
	I0717 22:51:32.425627   53870 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 22:51:32.425662   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:51:32.426023   53870 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 22:51:32.426060   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:51:32.429590   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.430060   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:32.430087   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.430464   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:51:32.430677   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:32.430839   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:51:32.430955   53870 sshutil.go:53] new ssh client: &{IP:192.168.50.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/old-k8s-version-332820/id_rsa Username:docker}
	I0717 22:51:32.535625   53870 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 22:51:32.541510   53870 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 22:51:32.541569   53870 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/addons for local assets ...
	I0717 22:51:32.541660   53870 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/files for local assets ...
	I0717 22:51:32.541771   53870 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> 229902.pem in /etc/ssl/certs
	I0717 22:51:32.541919   53870 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 22:51:32.554113   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:51:32.579574   53870 start.go:303] post-start completed in 153.943669ms
	I0717 22:51:32.579597   53870 fix.go:56] fixHost completed within 18.948892402s
	I0717 22:51:32.579620   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:51:32.582411   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.582774   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:32.582807   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.582939   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:51:32.583181   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:32.583404   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:32.583562   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:51:32.583804   53870 main.go:141] libmachine: Using SSH client type: native
	I0717 22:51:32.584270   53870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.149 22 <nil> <nil>}
	I0717 22:51:32.584287   53870 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 22:51:32.727134   53870 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689634292.668672695
	
	I0717 22:51:32.727160   53870 fix.go:206] guest clock: 1689634292.668672695
	I0717 22:51:32.727171   53870 fix.go:219] Guest: 2023-07-17 22:51:32.668672695 +0000 UTC Remote: 2023-07-17 22:51:32.579600815 +0000 UTC m=+359.756107714 (delta=89.07188ms)
	I0717 22:51:32.727195   53870 fix.go:190] guest clock delta is within tolerance: 89.07188ms
	I0717 22:51:32.727201   53870 start.go:83] releasing machines lock for "old-k8s-version-332820", held for 19.096529597s
	I0717 22:51:32.727223   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:51:32.727539   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetIP
	I0717 22:51:32.730521   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.730926   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:32.730958   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.731115   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:51:32.731706   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:51:32.731881   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:51:32.731968   53870 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 22:51:32.732018   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:51:32.732115   53870 ssh_runner.go:195] Run: cat /version.json
	I0717 22:51:32.732141   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:51:32.734864   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.735214   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:32.735264   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.735284   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.735387   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:51:32.735561   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:32.735821   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:32.735832   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:51:32.735852   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.735958   53870 sshutil.go:53] new ssh client: &{IP:192.168.50.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/old-k8s-version-332820/id_rsa Username:docker}
	I0717 22:51:32.736097   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:51:32.736224   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:32.736329   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:51:32.736435   53870 sshutil.go:53] new ssh client: &{IP:192.168.50.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/old-k8s-version-332820/id_rsa Username:docker}
	I0717 22:51:32.854136   53870 ssh_runner.go:195] Run: systemctl --version
	I0717 22:51:29.375082   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:31.376747   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:32.860997   53870 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 22:51:33.025325   53870 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 22:51:33.031587   53870 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 22:51:33.031662   53870 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:51:33.046431   53870 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 22:51:33.046454   53870 start.go:466] detecting cgroup driver to use...
	I0717 22:51:33.046520   53870 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 22:51:33.067265   53870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 22:51:33.079490   53870 docker.go:196] disabling cri-docker service (if available) ...
	I0717 22:51:33.079543   53870 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 22:51:33.093639   53870 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 22:51:33.106664   53870 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 22:51:33.248823   53870 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 22:51:33.414350   53870 docker.go:212] disabling docker service ...
	I0717 22:51:33.414420   53870 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 22:51:33.428674   53870 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 22:51:33.442140   53870 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 22:51:33.564890   53870 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 22:51:33.699890   53870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 22:51:33.714011   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 22:51:33.733726   53870 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0717 22:51:33.733825   53870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:51:33.746603   53870 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 22:51:33.746676   53870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:51:33.759291   53870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:51:33.772841   53870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:51:33.785507   53870 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 22:51:33.798349   53870 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 22:51:33.807468   53870 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 22:51:33.807578   53870 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 22:51:33.822587   53870 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 22:51:33.832542   53870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 22:51:33.975008   53870 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 22:51:34.192967   53870 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 22:51:34.193041   53870 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 22:51:34.200128   53870 start.go:534] Will wait 60s for crictl version
	I0717 22:51:34.200194   53870 ssh_runner.go:195] Run: which crictl
	I0717 22:51:34.204913   53870 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 22:51:34.243900   53870 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 22:51:34.244054   53870 ssh_runner.go:195] Run: crio --version
	I0717 22:51:34.300151   53870 ssh_runner.go:195] Run: crio --version
	I0717 22:51:34.365344   53870 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0717 22:51:35.258235   54573 api_server.go:279] https://192.168.39.6:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 22:51:35.258266   54573 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 22:51:35.758740   54573 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0717 22:51:35.767634   54573 api_server.go:279] https://192.168.39.6:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 22:51:35.767669   54573 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 22:51:36.259368   54573 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0717 22:51:36.269761   54573 api_server.go:279] https://192.168.39.6:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 22:51:36.269804   54573 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 22:51:36.759179   54573 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0717 22:51:36.767717   54573 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I0717 22:51:36.783171   54573 api_server.go:141] control plane version: v1.27.3
	I0717 22:51:36.783277   54573 api_server.go:131] duration metric: took 5.653264463s to wait for apiserver health ...
	I0717 22:51:36.783299   54573 cni.go:84] Creating CNI manager for ""
	I0717 22:51:36.783320   54573 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:51:36.785787   54573 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
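
The api_server.go lines above are minikube polling the apiserver's /healthz endpoint roughly every half second, treating the anonymous 403 and the 500s with failing poststarthooks as "not ready yet" and stopping at the first 200 OK before moving on to CNI configuration. A minimal standalone sketch of that polling loop follows; the four-minute timeout and the plain net/http client are assumptions for illustration (the endpoint URL is the one in the log), and certificate verification is skipped because the probe is anonymous.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
// 403 (anonymous user) and 500 (poststarthooks still failing) are treated as
// "apiserver not ready yet", matching the log lines above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// The probe is anonymous, so the apiserver certificate is not verified here.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "healthz returned 200: ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	// Address taken from the log above; adjust for your own cluster.
	if err := waitForHealthz("https://192.168.39.6:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```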
	I0717 22:51:32.594699   54649 api_server.go:52] waiting for apiserver process to appear ...
	I0717 22:51:32.594791   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:33.112226   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:33.611860   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:34.112071   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:34.611354   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:35.111291   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:35.611869   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:35.637583   54649 api_server.go:72] duration metric: took 3.042882856s to wait for apiserver process to appear ...
	I0717 22:51:35.637607   54649 api_server.go:88] waiting for apiserver healthz status ...
	I0717 22:51:35.637624   54649 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8444/healthz ...
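
In parallel, the 54649 run (the profile using apiserver port 8444) is still one step earlier: it re-runs pgrep about every half second until a kube-apiserver process started for this minikube profile exists at all, and only then begins its own healthz polling. A minimal sketch of that wait is below; the pgrep pattern is copied from the log, while the timeout and the direct use of exec.Command in place of minikube's ssh_runner are assumptions.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess re-runs pgrep until a matching kube-apiserver
// process shows up, as in the ssh_runner lines above.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Same invocation as the log: sudo pgrep -xnf kube-apiserver.*minikube.*
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // pgrep exits 0 once a matching process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(4 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
```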
	I0717 22:51:36.787709   54573 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 22:51:36.808980   54573 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
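
The "Configuring bridge CNI" step amounts to creating /etc/cni/net.d and writing a single conflist there (457 bytes in this run). The log does not show the file's contents, so the sketch below writes a plausible minimal bridge-plus-portmap conflist of the same general shape purely as an illustration; the subnet, plugin fields, and CNI version are assumptions, not what minikube actually embedded.

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Hypothetical minimal bridge CNI conflist; the real /etc/cni/net.d/1-k8s.conflist
	// scp'd above is not reproduced in the log, so these values are illustrative only.
	conflist := `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`
	// Writing into /etc/cni/net.d requires root on the node; this sketch just
	// drops the file into the current directory.
	if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Println("write conflist:", err)
	}
}
```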
	I0717 22:51:36.862525   54573 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 22:51:36.878653   54573 system_pods.go:59] 8 kube-system pods found
	I0717 22:51:36.878761   54573 system_pods.go:61] "coredns-5d78c9869d-2mpst" [7516b57f-a4cb-4e2f-995e-8e063bed22ae] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 22:51:36.878788   54573 system_pods.go:61] "etcd-no-preload-935524" [b663c4f9-d98e-457d-b511-435bea5e9525] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 22:51:36.878827   54573 system_pods.go:61] "kube-apiserver-no-preload-935524" [fb6f55fc-8705-46aa-a23c-e7870e52e542] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 22:51:36.878852   54573 system_pods.go:61] "kube-controller-manager-no-preload-935524" [37d43d22-e857-4a9b-b0cb-a9fc39931baa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 22:51:36.878874   54573 system_pods.go:61] "kube-proxy-qhp66" [8bc95955-b7ba-41e3-ac67-604a9695f784] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 22:51:36.878913   54573 system_pods.go:61] "kube-scheduler-no-preload-935524" [86fef4e0-b156-421a-8d53-0d34cae2cdb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 22:51:36.878940   54573 system_pods.go:61] "metrics-server-74d5c6b9c-tlbpl" [7c478efe-4435-45dd-a688-745872fc2918] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:51:36.878959   54573 system_pods.go:61] "storage-provisioner" [85812d54-7a57-430b-991e-e301f123a86a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 22:51:36.878991   54573 system_pods.go:74] duration metric: took 16.439496ms to wait for pod list to return data ...
	I0717 22:51:36.879014   54573 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:51:36.886556   54573 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:51:36.886669   54573 node_conditions.go:123] node cpu capacity is 2
	I0717 22:51:36.886694   54573 node_conditions.go:105] duration metric: took 7.665172ms to run NodePressure ...
	I0717 22:51:36.886743   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:37.408758   54573 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 22:51:37.426705   54573 kubeadm.go:787] kubelet initialised
	I0717 22:51:37.426750   54573 kubeadm.go:788] duration metric: took 17.898411ms waiting for restarted kubelet to initialise ...
	I0717 22:51:37.426760   54573 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:51:37.442893   54573 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-2mpst" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:37.449989   54573 pod_ready.go:97] node "no-preload-935524" hosting pod "coredns-5d78c9869d-2mpst" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.450020   54573 pod_ready.go:81] duration metric: took 7.096248ms waiting for pod "coredns-5d78c9869d-2mpst" in "kube-system" namespace to be "Ready" ...
	E0717 22:51:37.450032   54573 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-935524" hosting pod "coredns-5d78c9869d-2mpst" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.450043   54573 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:37.460343   54573 pod_ready.go:97] node "no-preload-935524" hosting pod "etcd-no-preload-935524" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.460423   54573 pod_ready.go:81] duration metric: took 10.370601ms waiting for pod "etcd-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	E0717 22:51:37.460468   54573 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-935524" hosting pod "etcd-no-preload-935524" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.460481   54573 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:37.475124   54573 pod_ready.go:97] node "no-preload-935524" hosting pod "kube-apiserver-no-preload-935524" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.475203   54573 pod_ready.go:81] duration metric: took 14.713192ms waiting for pod "kube-apiserver-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	E0717 22:51:37.475224   54573 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-935524" hosting pod "kube-apiserver-no-preload-935524" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.475242   54573 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:37.486443   54573 pod_ready.go:97] node "no-preload-935524" hosting pod "kube-controller-manager-no-preload-935524" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.486529   54573 pod_ready.go:81] duration metric: took 11.253247ms waiting for pod "kube-controller-manager-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	E0717 22:51:37.486551   54573 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-935524" hosting pod "kube-controller-manager-no-preload-935524" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.486570   54573 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qhp66" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:34.367014   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetIP
	I0717 22:51:34.370717   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:34.371243   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:34.371272   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:34.371626   53870 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0717 22:51:34.380223   53870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:51:34.395496   53870 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0717 22:51:34.395564   53870 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:51:34.440412   53870 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0717 22:51:34.440486   53870 ssh_runner.go:195] Run: which lz4
	I0717 22:51:34.445702   53870 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 22:51:34.451213   53870 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 22:51:34.451259   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0717 22:51:36.330808   53870 crio.go:444] Took 1.885143 seconds to copy over tarball
	I0717 22:51:36.330866   53870 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
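
The 53870 lines above are the preload path for the old-k8s-version profile: check with stat whether /preloaded.tar.lz4 already exists on the guest, copy the roughly 441 MB cached tarball over when it does not, unpack it into /var with tar -I lz4, and remove the tarball afterwards (the rm appears later in the log). A small sketch of that sequence is below; it runs the commands locally with exec.Command instead of over SSH, and the host-side cache path is an assumption.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensurePreload mirrors the stat -> copy -> extract -> remove sequence from
// the log, run locally rather than through minikube's ssh_runner.
func ensurePreload(cachePath, target string) error {
	if _, err := os.Stat(target); err != nil {
		// "No such file or directory" in the log; copy the cached tarball over.
		if out, err := exec.Command("cp", cachePath, target).CombinedOutput(); err != nil {
			return fmt.Errorf("copy tarball: %v: %s", err, out)
		}
	}
	// Equivalent of: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	if out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", target).CombinedOutput(); err != nil {
		return fmt.Errorf("extract tarball: %v: %s", err, out)
	}
	return os.Remove(target) // the log removes /preloaded.tar.lz4 once extracted
}

func main() {
	err := ensurePreload(
		"preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4", // assumed local cache path
		"/preloaded.tar.lz4",
	)
	if err != nil {
		fmt.Println(err)
	}
}
```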
	I0717 22:51:33.377108   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:35.379770   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:37.382141   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:37.819308   54573 pod_ready.go:97] node "no-preload-935524" hosting pod "kube-proxy-qhp66" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.819393   54573 pod_ready.go:81] duration metric: took 332.789076ms waiting for pod "kube-proxy-qhp66" in "kube-system" namespace to be "Ready" ...
	E0717 22:51:37.819414   54573 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-935524" hosting pod "kube-proxy-qhp66" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.819430   54573 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:38.213914   54573 pod_ready.go:97] node "no-preload-935524" hosting pod "kube-scheduler-no-preload-935524" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:38.213947   54573 pod_ready.go:81] duration metric: took 394.500573ms waiting for pod "kube-scheduler-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	E0717 22:51:38.213957   54573 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-935524" hosting pod "kube-scheduler-no-preload-935524" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:38.213967   54573 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:38.617826   54573 pod_ready.go:97] node "no-preload-935524" hosting pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:38.617855   54573 pod_ready.go:81] duration metric: took 403.88033ms waiting for pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace to be "Ready" ...
	E0717 22:51:38.617867   54573 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-935524" hosting pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:38.617878   54573 pod_ready.go:38] duration metric: took 1.191105641s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
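
The pod_ready.go lines are the "extra wait" that gives each system-critical pod up to four minutes to report Ready; here every wait short-circuits with the "(skipping!)" warning because the no-preload-935524 node itself is still NotReady. A hedged sketch of the underlying per-pod check is below, shelling out to kubectl rather than using minikube's own client; the kubectl context name and the pod chosen in main are assumptions for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podReady asks kubectl for a pod's status and reports whether its Ready
// condition is True, which is what the pod_ready.go waits poll for.
func podReady(context, namespace, name string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", context,
		"-n", namespace, "get", "pod", name, "-o", "json").Output()
	if err != nil {
		return false, err
	}
	var pod struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}
	if err := json.Unmarshal(out, &pod); err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	ready, err := podReady("no-preload-935524", "kube-system", "coredns-5d78c9869d-2mpst")
	fmt.Println(ready, err)
}
```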
	I0717 22:51:38.617907   54573 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 22:51:38.634486   54573 ops.go:34] apiserver oom_adj: -16
	I0717 22:51:38.634511   54573 kubeadm.go:640] restartCluster took 21.94326064s
	I0717 22:51:38.634520   54573 kubeadm.go:406] StartCluster complete in 21.998122781s
	I0717 22:51:38.634560   54573 settings.go:142] acquiring lock: {Name:mk9be0d05c943f5010cb1ca9690e7c6a87f18950 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:51:38.634648   54573 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:51:38.637414   54573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/kubeconfig: {Name:mk1295c692902c2f497638c9fc1fd126fdb1a2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:51:38.637733   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 22:51:38.637868   54573 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 22:51:38.637955   54573 addons.go:69] Setting storage-provisioner=true in profile "no-preload-935524"
	I0717 22:51:38.637972   54573 addons.go:231] Setting addon storage-provisioner=true in "no-preload-935524"
	W0717 22:51:38.637986   54573 addons.go:240] addon storage-provisioner should already be in state true
	I0717 22:51:38.638036   54573 host.go:66] Checking if "no-preload-935524" exists ...
	I0717 22:51:38.638418   54573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:51:38.638441   54573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:51:38.638510   54573 addons.go:69] Setting default-storageclass=true in profile "no-preload-935524"
	I0717 22:51:38.638530   54573 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-935524"
	I0717 22:51:38.638684   54573 addons.go:69] Setting metrics-server=true in profile "no-preload-935524"
	I0717 22:51:38.638700   54573 addons.go:231] Setting addon metrics-server=true in "no-preload-935524"
	W0717 22:51:38.638707   54573 addons.go:240] addon metrics-server should already be in state true
	I0717 22:51:38.638751   54573 host.go:66] Checking if "no-preload-935524" exists ...
	I0717 22:51:38.638977   54573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:51:38.639016   54573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:51:38.639083   54573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:51:38.639106   54573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:51:38.644028   54573 config.go:182] Loaded profile config "no-preload-935524": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:51:38.656131   54573 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-935524" context rescaled to 1 replicas
	I0717 22:51:38.656182   54573 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 22:51:38.658128   54573 out.go:177] * Verifying Kubernetes components...
	I0717 22:51:38.659350   54573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45639
	I0717 22:51:38.662767   54573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:51:38.660678   54573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46603
	I0717 22:51:38.663403   54573 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:51:38.664191   54573 main.go:141] libmachine: Using API Version  1
	I0717 22:51:38.664207   54573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:51:38.664296   54573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46321
	I0717 22:51:38.664660   54573 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:51:38.664872   54573 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:51:38.665287   54573 main.go:141] libmachine: Using API Version  1
	I0717 22:51:38.665301   54573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:51:38.665363   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetState
	I0717 22:51:38.666826   54573 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:51:38.667345   54573 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:51:38.667411   54573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:51:38.667432   54573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:51:38.667875   54573 main.go:141] libmachine: Using API Version  1
	I0717 22:51:38.667888   54573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:51:38.669299   54573 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:51:38.669907   54573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:51:38.669941   54573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:51:38.689870   54573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I0717 22:51:38.690029   54573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43747
	I0717 22:51:38.690596   54573 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:51:38.691039   54573 main.go:141] libmachine: Using API Version  1
	I0717 22:51:38.691052   54573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:51:38.691354   54573 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:51:38.691782   54573 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:51:38.691932   54573 main.go:141] libmachine: Using API Version  1
	I0717 22:51:38.691942   54573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:51:38.692153   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetState
	I0717 22:51:38.692209   54573 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:51:38.692391   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetState
	I0717 22:51:38.693179   54573 addons.go:231] Setting addon default-storageclass=true in "no-preload-935524"
	W0717 22:51:38.693197   54573 addons.go:240] addon default-storageclass should already be in state true
	I0717 22:51:38.693226   54573 host.go:66] Checking if "no-preload-935524" exists ...
	I0717 22:51:38.693599   54573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:51:38.693627   54573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:51:38.695740   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:51:38.698283   54573 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 22:51:38.696822   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:51:38.700282   54573 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 22:51:38.700294   54573 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 22:51:38.700313   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:51:38.702588   54573 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:51:38.704435   54573 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:51:38.704453   54573 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 22:51:38.704470   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:51:38.704034   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:51:38.704509   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:51:38.704545   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:51:38.705314   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:51:38.705704   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:51:38.705962   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:51:38.706101   54573 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/no-preload-935524/id_rsa Username:docker}
	I0717 22:51:38.707998   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:51:38.708366   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:51:38.708391   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:51:38.708663   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:51:38.708827   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:51:38.708935   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:51:38.709039   54573 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/no-preload-935524/id_rsa Username:docker}
	I0717 22:51:38.715303   54573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42441
	I0717 22:51:38.715765   54573 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:51:38.716225   54573 main.go:141] libmachine: Using API Version  1
	I0717 22:51:38.716238   54573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:51:38.716515   54573 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:51:38.716900   54573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:51:38.716915   54573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:51:38.775381   54573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42195
	I0717 22:51:38.781850   54573 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:51:38.782856   54573 main.go:141] libmachine: Using API Version  1
	I0717 22:51:38.782886   54573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:51:38.783335   54573 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:51:38.783547   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetState
	I0717 22:51:38.786539   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:51:38.786818   54573 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 22:51:38.786841   54573 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 22:51:38.786860   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:51:38.789639   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:51:38.793649   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:51:38.793678   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:51:38.793701   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:51:38.793926   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:51:38.794106   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:51:38.794262   54573 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/no-preload-935524/id_rsa Username:docker}
	I0717 22:51:38.862651   54573 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 22:51:38.862675   54573 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 22:51:38.914260   54573 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 22:51:38.914294   54573 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 22:51:38.933208   54573 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:51:38.959784   54573 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 22:51:38.959817   54573 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 22:51:38.977205   54573 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 22:51:39.028067   54573 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 22:51:39.145640   54573 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0717 22:51:39.145688   54573 node_ready.go:35] waiting up to 6m0s for node "no-preload-935524" to be "Ready" ...
	I0717 22:51:40.593928   54573 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.616678929s)
	I0717 22:51:40.593974   54573 main.go:141] libmachine: Making call to close driver server
	I0717 22:51:40.593987   54573 main.go:141] libmachine: (no-preload-935524) Calling .Close
	I0717 22:51:40.594018   54573 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.660755961s)
	I0717 22:51:40.594062   54573 main.go:141] libmachine: Making call to close driver server
	I0717 22:51:40.594078   54573 main.go:141] libmachine: (no-preload-935524) Calling .Close
	I0717 22:51:40.594360   54573 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:51:40.594377   54573 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:51:40.594388   54573 main.go:141] libmachine: Making call to close driver server
	I0717 22:51:40.594397   54573 main.go:141] libmachine: (no-preload-935524) Calling .Close
	I0717 22:51:40.596155   54573 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:51:40.596173   54573 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:51:40.596184   54573 main.go:141] libmachine: Making call to close driver server
	I0717 22:51:40.596201   54573 main.go:141] libmachine: (no-preload-935524) Calling .Close
	I0717 22:51:40.596345   54573 main.go:141] libmachine: (no-preload-935524) DBG | Closing plugin on server side
	I0717 22:51:40.596378   54573 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:51:40.596393   54573 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:51:40.596406   54573 main.go:141] libmachine: Making call to close driver server
	I0717 22:51:40.596415   54573 main.go:141] libmachine: (no-preload-935524) Calling .Close
	I0717 22:51:40.596536   54573 main.go:141] libmachine: (no-preload-935524) DBG | Closing plugin on server side
	I0717 22:51:40.596579   54573 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:51:40.596597   54573 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:51:40.596672   54573 main.go:141] libmachine: (no-preload-935524) DBG | Closing plugin on server side
	I0717 22:51:40.596706   54573 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:51:40.596716   54573 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:51:40.766149   54573 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.73803779s)
	I0717 22:51:40.766218   54573 main.go:141] libmachine: Making call to close driver server
	I0717 22:51:40.766233   54573 main.go:141] libmachine: (no-preload-935524) Calling .Close
	I0717 22:51:40.766573   54573 main.go:141] libmachine: (no-preload-935524) DBG | Closing plugin on server side
	I0717 22:51:40.766619   54573 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:51:40.766629   54573 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:51:40.766639   54573 main.go:141] libmachine: Making call to close driver server
	I0717 22:51:40.766648   54573 main.go:141] libmachine: (no-preload-935524) Calling .Close
	I0717 22:51:40.766954   54573 main.go:141] libmachine: (no-preload-935524) DBG | Closing plugin on server side
	I0717 22:51:40.766987   54573 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:51:40.766996   54573 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:51:40.767004   54573 addons.go:467] Verifying addon metrics-server=true in "no-preload-935524"
	I0717 22:51:40.921642   54573 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
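
Addon enablement in this phase is just the embedded kubectl on the guest applying the scp'd manifests against the local kubeconfig, as the three `kubectl apply -f ...` invocations above show, followed by a verification pass for metrics-server. A compressed sketch of the apply step is below; the binary and manifest paths are taken from the log, while the error handling and the assumption that it runs directly on the control-plane node are mine.

```go
package main

import (
	"fmt"
	"os/exec"
)

// applyAddon mirrors the log's guest-side invocation:
//   sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
//     /var/lib/minikube/binaries/v1.27.3/kubectl apply -f <manifest> ...
func applyAddon(manifests ...string) error {
	args := []string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.27.3/kubectl", "apply",
	}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
		return fmt.Errorf("kubectl apply: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Same manifest set the metrics-server addon pushed above.
	err := applyAddon(
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	)
	if err != nil {
		fmt.Println(err)
	}
}
```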
	I0717 22:51:40.099354   54649 api_server.go:279] https://192.168.72.118:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 22:51:40.099395   54649 api_server.go:103] status: https://192.168.72.118:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 22:51:40.600101   54649 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8444/healthz ...
	I0717 22:51:40.606334   54649 api_server.go:279] https://192.168.72.118:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 22:51:40.606375   54649 api_server.go:103] status: https://192.168.72.118:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 22:51:41.100086   54649 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8444/healthz ...
	I0717 22:51:41.110410   54649 api_server.go:279] https://192.168.72.118:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 22:51:41.110443   54649 api_server.go:103] status: https://192.168.72.118:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 22:51:41.599684   54649 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8444/healthz ...
	I0717 22:51:41.615650   54649 api_server.go:279] https://192.168.72.118:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 22:51:41.615693   54649 api_server.go:103] status: https://192.168.72.118:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 22:51:42.100229   54649 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8444/healthz ...
	I0717 22:51:42.109347   54649 api_server.go:279] https://192.168.72.118:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 22:51:42.109400   54649 api_server.go:103] status: https://192.168.72.118:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 22:51:42.600180   54649 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8444/healthz ...
	I0717 22:51:42.607799   54649 api_server.go:279] https://192.168.72.118:8444/healthz returned 200:
	ok
	I0717 22:51:42.621454   54649 api_server.go:141] control plane version: v1.27.3
	I0717 22:51:42.621480   54649 api_server.go:131] duration metric: took 6.983866635s to wait for apiserver health ...
	I0717 22:51:42.621491   54649 cni.go:84] Creating CNI manager for ""
	I0717 22:51:42.621503   54649 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:51:42.623222   54649 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 22:51:41.140227   54573 addons.go:502] enable addons completed in 2.502347716s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 22:51:41.154857   54573 node_ready.go:58] node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:40.037161   53870 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.706262393s)
	I0717 22:51:40.037203   53870 crio.go:451] Took 3.706370 seconds to extract the tarball
	I0717 22:51:40.037215   53870 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 22:51:40.089356   53870 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:51:40.143494   53870 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0717 22:51:40.143520   53870 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 22:51:40.143582   53870 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:51:40.143803   53870 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0717 22:51:40.143819   53870 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 22:51:40.143889   53870 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0717 22:51:40.143972   53870 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 22:51:40.143979   53870 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0717 22:51:40.144036   53870 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 22:51:40.144084   53870 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 22:51:40.151367   53870 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:51:40.151467   53870 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0717 22:51:40.152588   53870 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 22:51:40.152741   53870 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0717 22:51:40.152887   53870 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0717 22:51:40.152985   53870 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 22:51:40.153357   53870 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 22:51:40.153384   53870 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 22:51:40.317883   53870 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0717 22:51:40.322240   53870 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0717 22:51:40.325883   53870 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0717 22:51:40.325883   53870 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0717 22:51:40.326725   53870 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0717 22:51:40.328193   53870 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0717 22:51:40.356171   53870 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 22:51:40.485259   53870 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:51:40.493227   53870 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0717 22:51:40.493266   53870 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0717 22:51:40.493304   53870 ssh_runner.go:195] Run: which crictl
	I0717 22:51:40.514366   53870 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0717 22:51:40.514409   53870 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 22:51:40.514459   53870 ssh_runner.go:195] Run: which crictl
	I0717 22:51:40.578201   53870 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0717 22:51:40.578304   53870 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 22:51:40.578312   53870 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0717 22:51:40.578342   53870 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 22:51:40.578363   53870 ssh_runner.go:195] Run: which crictl
	I0717 22:51:40.578396   53870 ssh_runner.go:195] Run: which crictl
	I0717 22:51:40.578451   53870 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0717 22:51:40.578485   53870 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 22:51:40.578534   53870 ssh_runner.go:195] Run: which crictl
	I0717 22:51:40.578248   53870 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0717 22:51:40.578638   53870 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0717 22:51:40.578247   53870 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0717 22:51:40.578717   53870 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0717 22:51:40.578756   53870 ssh_runner.go:195] Run: which crictl
	I0717 22:51:40.578688   53870 ssh_runner.go:195] Run: which crictl
	I0717 22:51:40.717404   53870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0717 22:51:40.717482   53870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0717 22:51:40.717627   53870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0717 22:51:40.717740   53870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0717 22:51:40.717814   53870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0717 22:51:40.717918   53870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0717 22:51:40.718015   53870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 22:51:40.856246   53870 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0717 22:51:40.856291   53870 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0717 22:51:40.856403   53870 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0717 22:51:40.856438   53870 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0717 22:51:40.856526   53870 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0717 22:51:40.856575   53870 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0717 22:51:40.856604   53870 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0717 22:51:40.856653   53870 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I0717 22:51:40.861702   53870 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0717 22:51:40.861718   53870 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0717 22:51:40.861766   53870 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0717 22:51:42.019439   53870 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.157649631s)
	I0717 22:51:42.019471   53870 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0717 22:51:42.019512   53870 cache_images.go:92] LoadImages completed in 1.875976905s
	W0717 22:51:42.019588   53870 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
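The LoadImages sequence above checks each required image with podman image inspect, removes a mismatched copy with crictl rmi, and then loads the cached tarball with podman load. The following is a rough Go sketch of that check-then-load decision, shelling out to the same commands the log records; the error handling, image name, and tarball path are illustrative, not minikube's cache_images.go.

// loadcached.go: rough sketch of the "check, remove, load from cache" flow
// seen in the LoadImages log lines above. The commands mirror the ones
// logged (podman image inspect / crictl rmi / podman load); error handling
// is simplified and the chosen image and path are illustrative.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagePresent reports whether the container store already has the image.
func imagePresent(image string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	return err == nil && strings.TrimSpace(string(out)) != ""
}

// loadFromCache removes any stale copy and loads the cached tarball.
func loadFromCache(image, tarball string) error {
	_ = exec.Command("sudo", "crictl", "rmi", image).Run() // ignore "not found"
	if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
		return fmt.Errorf("podman load %s: %w", tarball, err)
	}
	return nil
}

func main() {
	image := "registry.k8s.io/pause:3.1"            // one of the images listed above
	tarball := "/var/lib/minikube/images/pause_3.1" // cache tarball copied to the VM
	if imagePresent(image) {
		fmt.Println(image, "already present, skipping")
		return
	}
	if err := loadFromCache(image, tarball); err != nil {
		fmt.Println("load failed:", err)
	}
}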
	I0717 22:51:42.019667   53870 ssh_runner.go:195] Run: crio config
	I0717 22:51:42.084276   53870 cni.go:84] Creating CNI manager for ""
	I0717 22:51:42.084310   53870 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:51:42.084329   53870 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 22:51:42.084352   53870 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.149 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-332820 NodeName:old-k8s-version-332820 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.149"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.149 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 22:51:42.084534   53870 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.149
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-332820"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.149
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.149"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-332820
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.149:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 22:51:42.084631   53870 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-332820 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-332820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 22:51:42.084705   53870 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0717 22:51:42.095493   53870 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 22:51:42.095576   53870 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 22:51:42.106777   53870 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0717 22:51:42.126860   53870 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 22:51:42.146610   53870 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0717 22:51:42.167959   53870 ssh_runner.go:195] Run: grep 192.168.50.149	control-plane.minikube.internal$ /etc/hosts
	I0717 22:51:42.171993   53870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.149	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:51:42.188635   53870 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820 for IP: 192.168.50.149
	I0717 22:51:42.188673   53870 certs.go:190] acquiring lock for shared ca certs: {Name:mk358cdd8ffcf2f8ada4337c2bb687d932ef5afe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:51:42.188887   53870 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key
	I0717 22:51:42.188945   53870 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key
	I0717 22:51:42.189042   53870 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/client.key
	I0717 22:51:42.189125   53870 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/apiserver.key.7e281e16
	I0717 22:51:42.189177   53870 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/proxy-client.key
	I0717 22:51:42.189322   53870 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem (1338 bytes)
	W0717 22:51:42.189362   53870 certs.go:433] ignoring /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990_empty.pem, impossibly tiny 0 bytes
	I0717 22:51:42.189377   53870 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 22:51:42.189413   53870 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem (1078 bytes)
	I0717 22:51:42.189456   53870 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem (1123 bytes)
	I0717 22:51:42.189502   53870 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem (1675 bytes)
	I0717 22:51:42.189590   53870 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:51:42.190495   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 22:51:42.219201   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 22:51:42.248355   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 22:51:42.275885   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 22:51:42.303987   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 22:51:42.329331   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 22:51:42.354424   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 22:51:42.386422   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 22:51:42.418872   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /usr/share/ca-certificates/229902.pem (1708 bytes)
	I0717 22:51:42.448869   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 22:51:42.473306   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem --> /usr/share/ca-certificates/22990.pem (1338 bytes)
	I0717 22:51:42.499302   53870 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 22:51:42.519833   53870 ssh_runner.go:195] Run: openssl version
	I0717 22:51:42.525933   53870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/229902.pem && ln -fs /usr/share/ca-certificates/229902.pem /etc/ssl/certs/229902.pem"
	I0717 22:51:42.537165   53870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/229902.pem
	I0717 22:51:42.545354   53870 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 21:49 /usr/share/ca-certificates/229902.pem
	I0717 22:51:42.545419   53870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/229902.pem
	I0717 22:51:42.551786   53870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/229902.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 22:51:42.561900   53870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 22:51:42.571880   53870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:51:42.576953   53870 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:51:42.577017   53870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:51:42.583311   53870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 22:51:42.593618   53870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22990.pem && ln -fs /usr/share/ca-certificates/22990.pem /etc/ssl/certs/22990.pem"
	I0717 22:51:42.604326   53870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22990.pem
	I0717 22:51:42.610022   53870 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 21:49 /usr/share/ca-certificates/22990.pem
	I0717 22:51:42.610084   53870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22990.pem
	I0717 22:51:42.615999   53870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22990.pem /etc/ssl/certs/51391683.0"
	I0717 22:51:42.627353   53870 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 22:51:42.632186   53870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 22:51:42.638738   53870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 22:51:42.645118   53870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 22:51:42.651619   53870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 22:51:42.658542   53870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 22:51:42.665449   53870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
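Each certificate above is validated with openssl x509 -noout -checkend 86400, i.e. "still valid 24 hours from now". The sketch below performs the equivalent check natively in Go with crypto/x509 instead of shelling out to openssl; the file path is taken from the log, and the rest is an illustrative assumption rather than minikube's own code.

// certcheck.go: native-Go equivalent of the `openssl x509 -noout -checkend
// 86400` calls in the log above: parse the PEM file and verify the
// certificate is still valid 24 hours from now. Illustrative sketch only.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func validFor(path string, d time.Duration) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return fmt.Errorf("%s: no PEM data", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if time.Now().Add(d).After(cert.NotAfter) {
		return fmt.Errorf("%s expires %s (within %s)", path, cert.NotAfter, d)
	}
	return nil
}

func main() {
	// Same 24-hour window as the -checkend 86400 calls in the log.
	if err := validFor("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour); err != nil {
		fmt.Println(err)
	}
}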
	I0717 22:51:42.673656   53870 kubeadm.go:404] StartCluster: {Name:old-k8s-version-332820 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-332820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.149 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:51:42.673776   53870 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 22:51:42.673832   53870 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:51:42.718032   53870 cri.go:89] found id: ""
	I0717 22:51:42.718127   53870 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 22:51:42.731832   53870 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 22:51:42.731856   53870 kubeadm.go:636] restartCluster start
	I0717 22:51:42.731907   53870 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 22:51:42.741531   53870 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:42.743035   53870 kubeconfig.go:92] found "old-k8s-version-332820" server: "https://192.168.50.149:8443"
	I0717 22:51:42.746440   53870 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 22:51:42.755816   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:42.755878   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:42.768767   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:39.384892   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:41.876361   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:42.624643   54649 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 22:51:42.660905   54649 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 22:51:42.733831   54649 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 22:51:42.761055   54649 system_pods.go:59] 8 kube-system pods found
	I0717 22:51:42.761093   54649 system_pods.go:61] "coredns-5d78c9869d-wpmhl" [ebfdf1a8-16b1-4e11-8bda-0b6afa127ed2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 22:51:42.761113   54649 system_pods.go:61] "etcd-default-k8s-diff-port-504828" [47338c6f-2509-4051-acaa-7281bbafe376] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 22:51:42.761125   54649 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-504828" [16961d82-f852-4c99-81af-a5b6290222d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 22:51:42.761138   54649 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-504828" [9e226305-9f41-4e56-8f8d-a250f46ab852] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 22:51:42.761165   54649 system_pods.go:61] "kube-proxy-kbp9x" [5a581d9c-4efa-49b7-8bd9-b877d5d12871] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 22:51:42.761183   54649 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-504828" [0d63a508-5b2b-4b61-b087-afdd063afbfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 22:51:42.761197   54649 system_pods.go:61] "metrics-server-74d5c6b9c-tj4st" [2cd90033-b07a-4458-8dac-5a618d4ed7ce] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:51:42.761207   54649 system_pods.go:61] "storage-provisioner" [c306122c-f32a-4455-a825-3e272a114ddc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 22:51:42.761217   54649 system_pods.go:74] duration metric: took 27.36753ms to wait for pod list to return data ...
	I0717 22:51:42.761226   54649 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:51:42.766615   54649 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:51:42.766640   54649 node_conditions.go:123] node cpu capacity is 2
	I0717 22:51:42.766651   54649 node_conditions.go:105] duration metric: took 5.41582ms to run NodePressure ...
	I0717 22:51:42.766666   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:43.144614   54649 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 22:51:43.151192   54649 kubeadm.go:787] kubelet initialised
	I0717 22:51:43.151229   54649 kubeadm.go:788] duration metric: took 6.579448ms waiting for restarted kubelet to initialise ...
	I0717 22:51:43.151245   54649 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:51:43.157867   54649 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-wpmhl" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:45.174145   54649 pod_ready.go:102] pod "coredns-5d78c9869d-wpmhl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:47.177320   54649 pod_ready.go:102] pod "coredns-5d78c9869d-wpmhl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:43.656678   54573 node_ready.go:58] node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:46.154037   54573 node_ready.go:49] node "no-preload-935524" has status "Ready":"True"
	I0717 22:51:46.154060   54573 node_ready.go:38] duration metric: took 7.008304923s waiting for node "no-preload-935524" to be "Ready" ...
	I0717 22:51:46.154068   54573 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:51:46.161581   54573 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-2mpst" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:46.167554   54573 pod_ready.go:92] pod "coredns-5d78c9869d-2mpst" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:46.167581   54573 pod_ready.go:81] duration metric: took 5.973951ms waiting for pod "coredns-5d78c9869d-2mpst" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:46.167593   54573 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:43.269246   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:43.269363   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:43.281553   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:43.769539   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:43.769648   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:43.784373   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:44.268932   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:44.269030   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:44.280678   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:44.769180   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:44.769268   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:44.782107   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:45.269718   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:45.269795   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:45.282616   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:45.768937   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:45.769014   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:45.782121   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:46.269531   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:46.269628   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:46.281901   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:46.769344   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:46.769437   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:46.784477   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:47.268980   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:47.269070   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:47.280858   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:47.769478   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:47.769577   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:47.783095   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:44.373907   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:46.375240   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:49.671705   54649 pod_ready.go:102] pod "coredns-5d78c9869d-wpmhl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:50.172053   54649 pod_ready.go:92] pod "coredns-5d78c9869d-wpmhl" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:50.172081   54649 pod_ready.go:81] duration metric: took 7.014190645s waiting for pod "coredns-5d78c9869d-wpmhl" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:50.172094   54649 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:52.186327   54649 pod_ready.go:102] pod "etcd-default-k8s-diff-port-504828" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:48.180621   54573 pod_ready.go:92] pod "etcd-no-preload-935524" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:48.180653   54573 pod_ready.go:81] duration metric: took 2.0130508s waiting for pod "etcd-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:48.180666   54573 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:48.185965   54573 pod_ready.go:92] pod "kube-apiserver-no-preload-935524" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:48.185985   54573 pod_ready.go:81] duration metric: took 5.310471ms waiting for pod "kube-apiserver-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:48.185996   54573 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:48.191314   54573 pod_ready.go:92] pod "kube-controller-manager-no-preload-935524" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:48.191335   54573 pod_ready.go:81] duration metric: took 5.331248ms waiting for pod "kube-controller-manager-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:48.191346   54573 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qhp66" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:48.197557   54573 pod_ready.go:92] pod "kube-proxy-qhp66" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:48.197576   54573 pod_ready.go:81] duration metric: took 6.222911ms waiting for pod "kube-proxy-qhp66" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:48.197586   54573 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:50.567470   54573 pod_ready.go:92] pod "kube-scheduler-no-preload-935524" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:50.567494   54573 pod_ready.go:81] duration metric: took 2.369900836s waiting for pod "kube-scheduler-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:50.567504   54573 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:52.582697   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:48.269386   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:48.269464   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:48.281178   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:48.769171   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:48.769255   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:48.781163   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:49.269813   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:49.269890   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:49.282099   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:49.769555   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:49.769659   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:49.782298   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:50.269111   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:50.269176   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:50.280805   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:50.769333   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:50.769438   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:50.781760   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:51.269299   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:51.269368   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:51.281559   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:51.769032   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:51.769096   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:51.780505   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:52.269033   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:52.269134   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:52.281362   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:52.755841   53870 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 22:51:52.755871   53870 kubeadm.go:1128] stopping kube-system containers ...
	I0717 22:51:52.755882   53870 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 22:51:52.755945   53870 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:51:52.789292   53870 cri.go:89] found id: ""
	I0717 22:51:52.789370   53870 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 22:51:52.805317   53870 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 22:51:52.814714   53870 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:51:52.814778   53870 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 22:51:52.824024   53870 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 22:51:52.824045   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:48.376709   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:50.877922   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:54.187055   54649 pod_ready.go:92] pod "etcd-default-k8s-diff-port-504828" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:54.187076   54649 pod_ready.go:81] duration metric: took 4.01497478s waiting for pod "etcd-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:54.187084   54649 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:54.195396   54649 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-504828" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:54.195426   54649 pod_ready.go:81] duration metric: took 8.33448ms waiting for pod "kube-apiserver-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:54.195440   54649 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:54.205666   54649 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-504828" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:54.205694   54649 pod_ready.go:81] duration metric: took 10.243213ms waiting for pod "kube-controller-manager-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:54.205713   54649 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kbp9x" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:54.217007   54649 pod_ready.go:92] pod "kube-proxy-kbp9x" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:54.217030   54649 pod_ready.go:81] duration metric: took 11.309771ms waiting for pod "kube-proxy-kbp9x" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:54.217041   54649 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:54.225509   54649 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-504828" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:54.225558   54649 pod_ready.go:81] duration metric: took 8.507279ms waiting for pod "kube-scheduler-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:54.225572   54649 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:56.592993   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:54.582860   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:56.583634   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:52.949663   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:53.985430   53870 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.035733754s)
	I0717 22:51:53.985459   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:54.222833   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:54.357196   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:54.468442   53870 api_server.go:52] waiting for apiserver process to appear ...
	I0717 22:51:54.468516   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:54.999095   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:55.499700   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:55.999447   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:56.051829   53870 api_server.go:72] duration metric: took 1.583387644s to wait for apiserver process to appear ...
	I0717 22:51:56.051856   53870 api_server.go:88] waiting for apiserver healthz status ...
	I0717 22:51:56.051872   53870 api_server.go:253] Checking apiserver healthz at https://192.168.50.149:8443/healthz ...
	I0717 22:51:53.374486   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:55.375033   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:57.376561   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:59.093181   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:01.592585   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:59.084169   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:01.583540   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:01.053643   53870 api_server.go:269] stopped: https://192.168.50.149:8443/healthz: Get "https://192.168.50.149:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 22:52:01.554418   53870 api_server.go:253] Checking apiserver healthz at https://192.168.50.149:8443/healthz ...
	I0717 22:52:01.627371   53870 api_server.go:279] https://192.168.50.149:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 22:52:01.627400   53870 api_server.go:103] status: https://192.168.50.149:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 22:52:02.054761   53870 api_server.go:253] Checking apiserver healthz at https://192.168.50.149:8443/healthz ...
	I0717 22:52:02.060403   53870 api_server.go:279] https://192.168.50.149:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0717 22:52:02.060431   53870 api_server.go:103] status: https://192.168.50.149:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0717 22:52:02.554085   53870 api_server.go:253] Checking apiserver healthz at https://192.168.50.149:8443/healthz ...
	I0717 22:52:02.561664   53870 api_server.go:279] https://192.168.50.149:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0717 22:52:02.561699   53870 api_server.go:103] status: https://192.168.50.149:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0717 22:51:59.876307   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:02.374698   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:03.054028   53870 api_server.go:253] Checking apiserver healthz at https://192.168.50.149:8443/healthz ...
	I0717 22:52:03.061055   53870 api_server.go:279] https://192.168.50.149:8443/healthz returned 200:
	ok
	I0717 22:52:03.069434   53870 api_server.go:141] control plane version: v1.16.0
	I0717 22:52:03.069465   53870 api_server.go:131] duration metric: took 7.017602055s to wait for apiserver health ...
	I0717 22:52:03.069475   53870 cni.go:84] Creating CNI manager for ""
	I0717 22:52:03.069485   53870 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:52:03.071306   53870 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 22:52:04.092490   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:06.592435   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:04.082787   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:06.089097   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:03.073009   53870 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 22:52:03.085399   53870 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 22:52:03.106415   53870 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 22:52:03.117136   53870 system_pods.go:59] 7 kube-system pods found
	I0717 22:52:03.117181   53870 system_pods.go:61] "coredns-5644d7b6d9-s9vtg" [7a1ccabb-ad03-47ef-804a-eff0b00ea65c] Running
	I0717 22:52:03.117191   53870 system_pods.go:61] "etcd-old-k8s-version-332820" [a1c2ef8d-fdb3-4394-944b-042870d25c4b] Running
	I0717 22:52:03.117198   53870 system_pods.go:61] "kube-apiserver-old-k8s-version-332820" [39a09f85-abd5-442a-887d-c04a91b87258] Running
	I0717 22:52:03.117206   53870 system_pods.go:61] "kube-controller-manager-old-k8s-version-332820" [94c599c4-d22c-4b5e-bf7b-ce0b81e21283] Running
	I0717 22:52:03.117212   53870 system_pods.go:61] "kube-proxy-vkjpn" [8fe8844c-f199-4bcb-b6a0-c6023c06ef75] Running
	I0717 22:52:03.117219   53870 system_pods.go:61] "kube-scheduler-old-k8s-version-332820" [a2102927-3de6-45d8-a37e-665adde8ca47] Running
	I0717 22:52:03.117227   53870 system_pods.go:61] "storage-provisioner" [b9bcb25d-294e-49ae-8650-98b1c7e5b4f8] Running
	I0717 22:52:03.117234   53870 system_pods.go:74] duration metric: took 10.793064ms to wait for pod list to return data ...
	I0717 22:52:03.117247   53870 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:52:03.122227   53870 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:52:03.122275   53870 node_conditions.go:123] node cpu capacity is 2
	I0717 22:52:03.122294   53870 node_conditions.go:105] duration metric: took 5.039156ms to run NodePressure ...
	I0717 22:52:03.122322   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:52:03.337823   53870 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 22:52:03.342104   53870 retry.go:31] will retry after 190.852011ms: kubelet not initialised
	I0717 22:52:03.537705   53870 retry.go:31] will retry after 190.447443ms: kubelet not initialised
	I0717 22:52:03.735450   53870 retry.go:31] will retry after 294.278727ms: kubelet not initialised
	I0717 22:52:04.034965   53870 retry.go:31] will retry after 808.339075ms: kubelet not initialised
	I0717 22:52:04.847799   53870 retry.go:31] will retry after 1.685522396s: kubelet not initialised
	I0717 22:52:06.537765   53870 retry.go:31] will retry after 1.595238483s: kubelet not initialised
	I0717 22:52:04.377461   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:06.876135   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:09.090739   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:11.093234   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:08.583118   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:11.083446   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:08.139297   53870 retry.go:31] will retry after 4.170190829s: kubelet not initialised
	I0717 22:52:12.317346   53870 retry.go:31] will retry after 5.652204651s: kubelet not initialised
	I0717 22:52:09.374610   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:11.375332   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:13.590999   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:15.591041   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:13.583868   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:16.081948   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:13.376027   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:15.874857   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:17.876130   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:17.593544   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:20.092121   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:18.082068   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:20.083496   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:22.582358   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:17.975640   53870 retry.go:31] will retry after 6.695949238s: kubelet not initialised
	I0717 22:52:20.375494   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:22.882209   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:22.591705   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:25.090965   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:25.082268   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:27.582422   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:24.676746   53870 retry.go:31] will retry after 10.942784794s: kubelet not initialised
	I0717 22:52:25.374526   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:27.375728   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:27.591516   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:30.091872   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:30.081334   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:32.082535   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:29.874508   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:31.876648   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:32.592067   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:35.092067   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:34.082954   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:36.585649   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:35.625671   53870 retry.go:31] will retry after 20.23050626s: kubelet not initialised
	I0717 22:52:34.376118   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:36.875654   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:37.592201   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:40.091539   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:39.081430   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:41.082360   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:39.374867   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:41.375759   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:42.590417   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:44.591742   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:46.593256   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:43.083211   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:45.084404   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:47.085099   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:43.376030   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:45.873482   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:47.875479   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:49.092376   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:51.592430   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:49.582087   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:52.083003   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:49.878981   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:52.374685   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:54.090617   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:56.091597   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:54.583455   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:57.081342   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:55.864261   53870 kubeadm.go:787] kubelet initialised
	I0717 22:52:55.864281   53870 kubeadm.go:788] duration metric: took 52.526433839s waiting for restarted kubelet to initialise ...
	I0717 22:52:55.864287   53870 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:52:55.870685   53870 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-s9vtg" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:55.877709   53870 pod_ready.go:92] pod "coredns-5644d7b6d9-s9vtg" in "kube-system" namespace has status "Ready":"True"
	I0717 22:52:55.877737   53870 pod_ready.go:81] duration metric: took 7.026411ms waiting for pod "coredns-5644d7b6d9-s9vtg" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:55.877750   53870 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-vnldz" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:55.883932   53870 pod_ready.go:92] pod "coredns-5644d7b6d9-vnldz" in "kube-system" namespace has status "Ready":"True"
	I0717 22:52:55.883961   53870 pod_ready.go:81] duration metric: took 6.200731ms waiting for pod "coredns-5644d7b6d9-vnldz" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:55.883974   53870 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-332820" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:55.889729   53870 pod_ready.go:92] pod "etcd-old-k8s-version-332820" in "kube-system" namespace has status "Ready":"True"
	I0717 22:52:55.889749   53870 pod_ready.go:81] duration metric: took 5.767797ms waiting for pod "etcd-old-k8s-version-332820" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:55.889757   53870 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-332820" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:55.895286   53870 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-332820" in "kube-system" namespace has status "Ready":"True"
	I0717 22:52:55.895308   53870 pod_ready.go:81] duration metric: took 5.545198ms waiting for pod "kube-apiserver-old-k8s-version-332820" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:55.895316   53870 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-332820" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:56.263125   53870 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-332820" in "kube-system" namespace has status "Ready":"True"
	I0717 22:52:56.263153   53870 pod_ready.go:81] duration metric: took 367.829768ms waiting for pod "kube-controller-manager-old-k8s-version-332820" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:56.263166   53870 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vkjpn" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:56.663235   53870 pod_ready.go:92] pod "kube-proxy-vkjpn" in "kube-system" namespace has status "Ready":"True"
	I0717 22:52:56.663262   53870 pod_ready.go:81] duration metric: took 400.086969ms waiting for pod "kube-proxy-vkjpn" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:56.663276   53870 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-332820" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:57.061892   53870 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-332820" in "kube-system" namespace has status "Ready":"True"
	I0717 22:52:57.061917   53870 pod_ready.go:81] duration metric: took 398.633591ms waiting for pod "kube-scheduler-old-k8s-version-332820" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:57.061930   53870 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:54.374907   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:56.875242   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:58.092082   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:00.590626   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:59.081826   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:01.086158   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:59.469353   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:01.968383   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:59.374420   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:01.374640   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:02.595710   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:05.094211   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:03.582006   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:05.582348   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:07.582585   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:03.969801   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:06.469220   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:03.374665   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:05.375182   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:07.874673   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:07.593189   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:10.091253   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:10.083277   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:12.581195   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:08.973101   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:11.471187   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:10.375255   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:12.875038   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:12.593192   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:15.090204   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:17.091416   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:14.581962   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:17.082092   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:13.970246   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:16.469918   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:15.374678   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:17.375402   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:19.592518   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:22.090462   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:19.582582   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:21.582788   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:18.969975   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:21.471221   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:19.876416   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:22.377064   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:24.592012   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:26.593013   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:24.082409   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:26.581889   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:23.967680   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:25.969061   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:24.876092   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:26.876727   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:29.090937   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:31.092276   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:28.583371   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:30.588656   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:28.470667   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:30.969719   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:29.374066   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:31.375107   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:33.590361   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:35.591199   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:33.082794   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:35.583369   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:33.468669   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:35.468917   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:37.469656   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:33.873830   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:35.875551   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:38.091032   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:40.095610   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:38.083632   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:40.584069   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:39.970389   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:41.972121   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:38.374344   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:40.375117   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:42.873817   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:42.591348   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:44.591801   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:47.091463   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:43.092800   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:45.583147   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:44.468092   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:46.968583   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:44.875165   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:46.875468   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:49.592016   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:52.092191   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:48.082358   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:50.581430   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:52.581722   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:48.970562   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:51.469666   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:49.374655   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:51.374912   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:54.590857   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:57.090986   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:54.581979   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:57.081602   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:53.969845   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:56.470092   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:53.874630   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:56.374076   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:59.093019   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:01.590296   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:59.581481   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:02.081651   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:58.969243   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:00.969793   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:58.874500   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:00.875485   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:03.591663   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:06.091377   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:04.082661   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:06.581409   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:02.969900   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:05.469513   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:07.469630   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:03.374576   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:05.874492   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:07.876025   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:08.092299   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:10.591576   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:08.582962   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:11.081623   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:09.469674   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:11.970568   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:09.878298   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:12.375542   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:13.089815   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:15.091295   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:13.082485   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:15.582545   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:14.469264   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:16.970184   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:14.876188   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:17.375197   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:17.590457   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:19.590668   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:21.592281   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:18.082882   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:20.581232   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:22.581451   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:19.470007   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:21.972545   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:19.874905   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:21.876111   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:24.090912   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:26.091423   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:24.582104   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:27.082466   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:24.468612   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:26.468733   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:24.375195   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:26.375302   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:28.092426   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:30.590750   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:29.083200   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:31.581109   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:28.469411   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:30.474485   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:28.376063   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:30.874877   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:32.875720   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:32.591688   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:34.592382   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:37.091435   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:33.582072   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:35.582710   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:32.968863   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:34.969408   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:37.469461   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:35.375657   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:37.873420   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:39.091786   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:41.591723   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:38.082103   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:40.582480   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:39.470591   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:41.969425   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:39.876026   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:41.876450   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:44.090732   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:46.091209   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:43.082746   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:45.580745   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:47.581165   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:44.469624   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:46.469853   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:44.375526   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:46.874381   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:48.091542   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:50.591973   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:49.583795   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:52.084521   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:48.969202   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:50.969996   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:48.874772   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:50.876953   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:53.092284   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:55.591945   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:54.582260   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:56.582456   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:53.468921   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:55.469467   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:57.469588   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:53.375369   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:55.375834   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:57.875412   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:58.092340   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:00.593507   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:58.582790   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:01.082714   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:59.968899   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:01.970513   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:59.876100   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:02.377093   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:02.594240   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:05.091858   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:03.584934   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:06.082560   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:04.469605   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:06.470074   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:04.874495   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:06.874619   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:07.591151   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:09.594253   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:12.092136   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:08.082731   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:10.594934   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:08.970358   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:10.972021   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:08.875055   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:10.875177   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:11.360474   54248 pod_ready.go:81] duration metric: took 4m0.00020957s waiting for pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace to be "Ready" ...
	E0717 22:55:11.360506   54248 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 22:55:11.360523   54248 pod_ready.go:38] duration metric: took 4m12.083431067s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:55:11.360549   54248 kubeadm.go:640] restartCluster took 4m32.267522493s
	W0717 22:55:11.360621   54248 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0717 22:55:11.360653   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 22:55:14.094015   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:16.590201   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:13.082448   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:15.581674   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:17.582135   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:13.471096   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:15.970057   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:18.591981   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:21.091787   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:19.584462   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:22.082310   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:18.469828   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:20.970377   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:23.092278   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:25.594454   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:24.583377   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:27.082479   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:23.470427   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:25.473350   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:28.091878   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:30.092032   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:29.582576   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:31.584147   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:27.969045   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:30.468478   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:32.469942   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:32.591274   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:34.591477   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:37.089772   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:34.082460   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:36.082687   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:34.470431   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:36.470791   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:39.091253   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:41.091286   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:38.082836   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:40.581494   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:42.583634   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:38.969011   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:40.969922   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:43.092434   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:45.591302   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:45.083869   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:47.582454   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:43.468968   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:45.469340   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:47.471805   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:43.113858   54248 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.753186356s)
	I0717 22:55:43.113920   54248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:55:43.128803   54248 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 22:55:43.138891   54248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 22:55:43.148155   54248 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:55:43.148209   54248 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 22:55:43.357368   54248 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
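[editor's note] The kubeadm.go:152 line above shows minikube deciding to skip the stale-config cleanup because none of the kubeadm-generated kubeconfig files exist on the node, and then launching `kubeadm init`. A minimal, hypothetical Go sketch of that decision follows; the real code runs `sudo ls -la` over SSH inside the guest, while this sketch only approximates the check locally (the file list is copied from the log).

package main

import (
	"fmt"
	"os"
)

// Config files kubeadm writes on a previously initialized node. If any of
// them is missing, the stale-config cleanup is skipped and `kubeadm init`
// runs instead, as in the log above.
var kubeconfigFiles = []string{
	"/etc/kubernetes/admin.conf",
	"/etc/kubernetes/kubelet.conf",
	"/etc/kubernetes/controller-manager.conf",
	"/etc/kubernetes/scheduler.conf",
}

func staleConfigPresent() bool {
	for _, f := range kubeconfigFiles {
		if _, err := os.Stat(f); err != nil {
			return false // any missing file fails the check
		}
	}
	return true
}

func main() {
	if !staleConfigPresent() {
		fmt.Println("config check failed, skipping stale config cleanup")
	}
}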
	I0717 22:55:47.591967   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:50.092046   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:52.092670   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:50.081152   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:50.568456   54573 pod_ready.go:81] duration metric: took 4m0.000934324s waiting for pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace to be "Ready" ...
	E0717 22:55:50.568492   54573 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 22:55:50.568506   54573 pod_ready.go:38] duration metric: took 4m4.414427298s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:55:50.568531   54573 api_server.go:52] waiting for apiserver process to appear ...
	I0717 22:55:50.568581   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 22:55:50.568650   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 22:55:50.622016   54573 cri.go:89] found id: "c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f"
	I0717 22:55:50.622048   54573 cri.go:89] found id: ""
	I0717 22:55:50.622058   54573 logs.go:284] 1 containers: [c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f]
	I0717 22:55:50.622114   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:50.627001   54573 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 22:55:50.627065   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 22:55:50.665053   54573 cri.go:89] found id: "98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea"
	I0717 22:55:50.665073   54573 cri.go:89] found id: ""
	I0717 22:55:50.665082   54573 logs.go:284] 1 containers: [98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea]
	I0717 22:55:50.665143   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:50.670198   54573 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 22:55:50.670261   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 22:55:50.705569   54573 cri.go:89] found id: "acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266"
	I0717 22:55:50.705595   54573 cri.go:89] found id: ""
	I0717 22:55:50.705604   54573 logs.go:284] 1 containers: [acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266]
	I0717 22:55:50.705669   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:50.710494   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 22:55:50.710569   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 22:55:50.772743   54573 cri.go:89] found id: "692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629"
	I0717 22:55:50.772768   54573 cri.go:89] found id: ""
	I0717 22:55:50.772776   54573 logs.go:284] 1 containers: [692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629]
	I0717 22:55:50.772831   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:50.777741   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 22:55:50.777813   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 22:55:50.809864   54573 cri.go:89] found id: "9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567"
	I0717 22:55:50.809892   54573 cri.go:89] found id: ""
	I0717 22:55:50.809903   54573 logs.go:284] 1 containers: [9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567]
	I0717 22:55:50.809963   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:50.814586   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 22:55:50.814654   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 22:55:50.850021   54573 cri.go:89] found id: "f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c"
	I0717 22:55:50.850047   54573 cri.go:89] found id: ""
	I0717 22:55:50.850056   54573 logs.go:284] 1 containers: [f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c]
	I0717 22:55:50.850125   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:50.854615   54573 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 22:55:50.854685   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 22:55:50.893272   54573 cri.go:89] found id: ""
	I0717 22:55:50.893300   54573 logs.go:284] 0 containers: []
	W0717 22:55:50.893310   54573 logs.go:286] No container was found matching "kindnet"
	I0717 22:55:50.893318   54573 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 22:55:50.893377   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 22:55:50.926652   54573 cri.go:89] found id: "a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b"
	I0717 22:55:50.926676   54573 cri.go:89] found id: "4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6"
	I0717 22:55:50.926682   54573 cri.go:89] found id: ""
	I0717 22:55:50.926690   54573 logs.go:284] 2 containers: [a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b 4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6]
	I0717 22:55:50.926747   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:50.931220   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:50.935745   54573 logs.go:123] Gathering logs for kubelet ...
	I0717 22:55:50.935772   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 22:55:51.002727   54573 logs.go:123] Gathering logs for kube-scheduler [692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629] ...
	I0717 22:55:51.002760   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629"
	I0717 22:55:51.046774   54573 logs.go:123] Gathering logs for kube-proxy [9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567] ...
	I0717 22:55:51.046811   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567"
	I0717 22:55:51.081441   54573 logs.go:123] Gathering logs for storage-provisioner [a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b] ...
	I0717 22:55:51.081472   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b"
	I0717 22:55:51.119354   54573 logs.go:123] Gathering logs for CRI-O ...
	I0717 22:55:51.119394   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 22:55:51.710591   54573 logs.go:123] Gathering logs for kube-controller-manager [f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c] ...
	I0717 22:55:51.710634   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c"
	I0717 22:55:51.758647   54573 logs.go:123] Gathering logs for storage-provisioner [4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6] ...
	I0717 22:55:51.758679   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6"
	I0717 22:55:51.792417   54573 logs.go:123] Gathering logs for container status ...
	I0717 22:55:51.792458   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 22:55:51.836268   54573 logs.go:123] Gathering logs for dmesg ...
	I0717 22:55:51.836302   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 22:55:51.852009   54573 logs.go:123] Gathering logs for describe nodes ...
	I0717 22:55:51.852038   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 22:55:52.018156   54573 logs.go:123] Gathering logs for kube-apiserver [c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f] ...
	I0717 22:55:52.018191   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f"
	I0717 22:55:52.061680   54573 logs.go:123] Gathering logs for etcd [98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea] ...
	I0717 22:55:52.061723   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea"
	I0717 22:55:52.105407   54573 logs.go:123] Gathering logs for coredns [acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266] ...
	I0717 22:55:52.105437   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266"
	I0717 22:55:49.969074   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:51.969157   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:54.934299   54248 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0717 22:55:54.934395   54248 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 22:55:54.934498   54248 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 22:55:54.934616   54248 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 22:55:54.934741   54248 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 22:55:54.934823   54248 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 22:55:54.936386   54248 out.go:204]   - Generating certificates and keys ...
	I0717 22:55:54.936475   54248 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 22:55:54.936548   54248 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 22:55:54.936643   54248 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 22:55:54.936719   54248 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0717 22:55:54.936803   54248 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 22:55:54.936871   54248 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0717 22:55:54.936947   54248 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0717 22:55:54.937023   54248 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0717 22:55:54.937125   54248 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 22:55:54.937219   54248 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 22:55:54.937269   54248 kubeadm.go:322] [certs] Using the existing "sa" key
	I0717 22:55:54.937333   54248 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 22:55:54.937395   54248 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 22:55:54.937460   54248 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 22:55:54.937551   54248 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 22:55:54.937620   54248 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 22:55:54.937744   54248 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 22:55:54.937846   54248 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 22:55:54.937894   54248 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 22:55:54.937990   54248 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 22:55:54.939409   54248 out.go:204]   - Booting up control plane ...
	I0717 22:55:54.939534   54248 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 22:55:54.939640   54248 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 22:55:54.939733   54248 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 22:55:54.939867   54248 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 22:55:54.940059   54248 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 22:55:54.940157   54248 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504894 seconds
	I0717 22:55:54.940283   54248 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 22:55:54.940445   54248 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 22:55:54.940525   54248 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 22:55:54.940756   54248 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-571296 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 22:55:54.940829   54248 kubeadm.go:322] [bootstrap-token] Using token: zn3d72.w9x4plx1baw35867
	I0717 22:55:54.942338   54248 out.go:204]   - Configuring RBAC rules ...
	I0717 22:55:54.942484   54248 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 22:55:54.942583   54248 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 22:55:54.942759   54248 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 22:55:54.942920   54248 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 22:55:54.943088   54248 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 22:55:54.943207   54248 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 22:55:54.943365   54248 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 22:55:54.943433   54248 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 22:55:54.943527   54248 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 22:55:54.943541   54248 kubeadm.go:322] 
	I0717 22:55:54.943646   54248 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 22:55:54.943673   54248 kubeadm.go:322] 
	I0717 22:55:54.943765   54248 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 22:55:54.943774   54248 kubeadm.go:322] 
	I0717 22:55:54.943814   54248 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 22:55:54.943906   54248 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 22:55:54.943997   54248 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 22:55:54.944009   54248 kubeadm.go:322] 
	I0717 22:55:54.944107   54248 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0717 22:55:54.944121   54248 kubeadm.go:322] 
	I0717 22:55:54.944194   54248 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 22:55:54.944204   54248 kubeadm.go:322] 
	I0717 22:55:54.944277   54248 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 22:55:54.944390   54248 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 22:55:54.944472   54248 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 22:55:54.944479   54248 kubeadm.go:322] 
	I0717 22:55:54.944574   54248 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 22:55:54.944667   54248 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 22:55:54.944677   54248 kubeadm.go:322] 
	I0717 22:55:54.944778   54248 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token zn3d72.w9x4plx1baw35867 \
	I0717 22:55:54.944924   54248 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb \
	I0717 22:55:54.944959   54248 kubeadm.go:322] 	--control-plane 
	I0717 22:55:54.944965   54248 kubeadm.go:322] 
	I0717 22:55:54.945096   54248 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 22:55:54.945110   54248 kubeadm.go:322] 
	I0717 22:55:54.945206   54248 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token zn3d72.w9x4plx1baw35867 \
	I0717 22:55:54.945367   54248 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb 
	I0717 22:55:54.945384   54248 cni.go:84] Creating CNI manager for ""
	I0717 22:55:54.945396   54248 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:55:54.947694   54248 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
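[editor's note] The "Configuring bridge CNI" step creates /etc/cni/net.d on the node and copies a 457-byte 1-k8s.conflist into it (see the scp line later in this log). The exact contents of that file are not captured here; the sketch below writes a generic bridge + host-local conflist in the same place as an illustration, with the subnet and plugin options assumed rather than taken from minikube's template.

package main

import (
	"os"
	"path/filepath"
)

// A generic bridge CNI configuration with host-local IPAM. This is an
// illustrative stand-in, not the exact 457-byte file minikube copied.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	dir := "/etc/cni/net.d"
	if err := os.MkdirAll(dir, 0o755); err != nil { // mirrors `sudo mkdir -p /etc/cni/net.d`
		panic(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}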
	I0717 22:55:54.092792   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:54.226690   54649 pod_ready.go:81] duration metric: took 4m0.00109908s waiting for pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace to be "Ready" ...
	E0717 22:55:54.226723   54649 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 22:55:54.226748   54649 pod_ready.go:38] duration metric: took 4m11.075490865s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:55:54.226791   54649 kubeadm.go:640] restartCluster took 4m33.196357187s
	W0717 22:55:54.226860   54649 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0717 22:55:54.226891   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 22:55:54.639076   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:55:54.659284   54573 api_server.go:72] duration metric: took 4m16.00305446s to wait for apiserver process to appear ...
	I0717 22:55:54.659324   54573 api_server.go:88] waiting for apiserver healthz status ...
	I0717 22:55:54.659366   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 22:55:54.659437   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 22:55:54.698007   54573 cri.go:89] found id: "c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f"
	I0717 22:55:54.698036   54573 cri.go:89] found id: ""
	I0717 22:55:54.698045   54573 logs.go:284] 1 containers: [c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f]
	I0717 22:55:54.698104   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:54.704502   54573 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 22:55:54.704584   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 22:55:54.738722   54573 cri.go:89] found id: "98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea"
	I0717 22:55:54.738752   54573 cri.go:89] found id: ""
	I0717 22:55:54.738761   54573 logs.go:284] 1 containers: [98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea]
	I0717 22:55:54.738816   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:54.743815   54573 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 22:55:54.743888   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 22:55:54.789962   54573 cri.go:89] found id: "acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266"
	I0717 22:55:54.789992   54573 cri.go:89] found id: ""
	I0717 22:55:54.790003   54573 logs.go:284] 1 containers: [acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266]
	I0717 22:55:54.790061   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:54.796502   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 22:55:54.796577   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 22:55:54.840319   54573 cri.go:89] found id: "692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629"
	I0717 22:55:54.840349   54573 cri.go:89] found id: ""
	I0717 22:55:54.840358   54573 logs.go:284] 1 containers: [692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629]
	I0717 22:55:54.840418   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:54.847001   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 22:55:54.847074   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 22:55:54.900545   54573 cri.go:89] found id: "9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567"
	I0717 22:55:54.900571   54573 cri.go:89] found id: ""
	I0717 22:55:54.900578   54573 logs.go:284] 1 containers: [9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567]
	I0717 22:55:54.900639   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:54.905595   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 22:55:54.905703   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 22:55:54.940386   54573 cri.go:89] found id: "f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c"
	I0717 22:55:54.940405   54573 cri.go:89] found id: ""
	I0717 22:55:54.940414   54573 logs.go:284] 1 containers: [f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c]
	I0717 22:55:54.940471   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:54.947365   54573 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 22:55:54.947444   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 22:55:54.993902   54573 cri.go:89] found id: ""
	I0717 22:55:54.993930   54573 logs.go:284] 0 containers: []
	W0717 22:55:54.993942   54573 logs.go:286] No container was found matching "kindnet"
	I0717 22:55:54.993950   54573 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 22:55:54.994019   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 22:55:55.040159   54573 cri.go:89] found id: "a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b"
	I0717 22:55:55.040184   54573 cri.go:89] found id: "4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6"
	I0717 22:55:55.040190   54573 cri.go:89] found id: ""
	I0717 22:55:55.040198   54573 logs.go:284] 2 containers: [a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b 4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6]
	I0717 22:55:55.040265   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:55.045151   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:55.050805   54573 logs.go:123] Gathering logs for kubelet ...
	I0717 22:55:55.050831   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 22:55:55.123810   54573 logs.go:123] Gathering logs for describe nodes ...
	I0717 22:55:55.123845   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 22:55:55.306589   54573 logs.go:123] Gathering logs for kube-scheduler [692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629] ...
	I0717 22:55:55.306623   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629"
	I0717 22:55:55.351035   54573 logs.go:123] Gathering logs for kube-controller-manager [f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c] ...
	I0717 22:55:55.351083   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c"
	I0717 22:55:55.416647   54573 logs.go:123] Gathering logs for storage-provisioner [a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b] ...
	I0717 22:55:55.416705   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b"
	I0717 22:55:55.460413   54573 logs.go:123] Gathering logs for CRI-O ...
	I0717 22:55:55.460452   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 22:55:56.034198   54573 logs.go:123] Gathering logs for container status ...
	I0717 22:55:56.034238   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 22:55:56.073509   54573 logs.go:123] Gathering logs for dmesg ...
	I0717 22:55:56.073552   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 22:55:56.086385   54573 logs.go:123] Gathering logs for kube-apiserver [c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f] ...
	I0717 22:55:56.086413   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f"
	I0717 22:55:56.132057   54573 logs.go:123] Gathering logs for etcd [98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea] ...
	I0717 22:55:56.132087   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea"
	I0717 22:55:56.176634   54573 logs.go:123] Gathering logs for coredns [acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266] ...
	I0717 22:55:56.176663   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266"
	I0717 22:55:56.213415   54573 logs.go:123] Gathering logs for kube-proxy [9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567] ...
	I0717 22:55:56.213451   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567"
	I0717 22:55:56.248868   54573 logs.go:123] Gathering logs for storage-provisioner [4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6] ...
	I0717 22:55:56.248912   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6"
	I0717 22:55:53.969902   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:56.470299   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:54.949399   54248 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 22:55:54.984090   54248 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 22:55:55.014819   54248 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 22:55:55.014950   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:55.015014   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.31.0 minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8 minikube.k8s.io/name=embed-certs-571296 minikube.k8s.io/updated_at=2023_07_17T22_55_55_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:55.558851   54248 ops.go:34] apiserver oom_adj: -16
	I0717 22:55:55.558970   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:56.177713   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:56.677742   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:57.177957   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:57.677787   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:58.793638   54573 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0717 22:55:58.806705   54573 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I0717 22:55:58.808953   54573 api_server.go:141] control plane version: v1.27.3
	I0717 22:55:58.808972   54573 api_server.go:131] duration metric: took 4.149642061s to wait for apiserver health ...
	I0717 22:55:58.808979   54573 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 22:55:58.808999   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 22:55:58.809042   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 22:55:58.840945   54573 cri.go:89] found id: "c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f"
	I0717 22:55:58.840965   54573 cri.go:89] found id: ""
	I0717 22:55:58.840972   54573 logs.go:284] 1 containers: [c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f]
	I0717 22:55:58.841028   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:58.845463   54573 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 22:55:58.845557   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 22:55:58.877104   54573 cri.go:89] found id: "98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea"
	I0717 22:55:58.877134   54573 cri.go:89] found id: ""
	I0717 22:55:58.877143   54573 logs.go:284] 1 containers: [98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea]
	I0717 22:55:58.877199   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:58.881988   54573 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 22:55:58.882060   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 22:55:58.920491   54573 cri.go:89] found id: "acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266"
	I0717 22:55:58.920520   54573 cri.go:89] found id: ""
	I0717 22:55:58.920530   54573 logs.go:284] 1 containers: [acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266]
	I0717 22:55:58.920588   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:58.925170   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 22:55:58.925239   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 22:55:58.970908   54573 cri.go:89] found id: "692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629"
	I0717 22:55:58.970928   54573 cri.go:89] found id: ""
	I0717 22:55:58.970937   54573 logs.go:284] 1 containers: [692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629]
	I0717 22:55:58.970988   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:58.976950   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 22:55:58.977005   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 22:55:59.007418   54573 cri.go:89] found id: "9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567"
	I0717 22:55:59.007438   54573 cri.go:89] found id: ""
	I0717 22:55:59.007445   54573 logs.go:284] 1 containers: [9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567]
	I0717 22:55:59.007550   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:59.012222   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 22:55:59.012279   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 22:55:59.048939   54573 cri.go:89] found id: "f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c"
	I0717 22:55:59.048960   54573 cri.go:89] found id: ""
	I0717 22:55:59.048968   54573 logs.go:284] 1 containers: [f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c]
	I0717 22:55:59.049023   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:59.053335   54573 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 22:55:59.053400   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 22:55:59.084168   54573 cri.go:89] found id: ""
	I0717 22:55:59.084198   54573 logs.go:284] 0 containers: []
	W0717 22:55:59.084208   54573 logs.go:286] No container was found matching "kindnet"
	I0717 22:55:59.084221   54573 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 22:55:59.084270   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 22:55:59.117213   54573 cri.go:89] found id: "a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b"
	I0717 22:55:59.117237   54573 cri.go:89] found id: "4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6"
	I0717 22:55:59.117244   54573 cri.go:89] found id: ""
	I0717 22:55:59.117252   54573 logs.go:284] 2 containers: [a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b 4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6]
	I0717 22:55:59.117311   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:59.122816   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:59.127074   54573 logs.go:123] Gathering logs for dmesg ...
	I0717 22:55:59.127095   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 22:55:59.142525   54573 logs.go:123] Gathering logs for kube-apiserver [c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f] ...
	I0717 22:55:59.142557   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f"
	I0717 22:55:59.190652   54573 logs.go:123] Gathering logs for kube-scheduler [692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629] ...
	I0717 22:55:59.190690   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629"
	I0717 22:55:59.231512   54573 logs.go:123] Gathering logs for kube-controller-manager [f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c] ...
	I0717 22:55:59.231547   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c"
	I0717 22:55:59.280732   54573 logs.go:123] Gathering logs for storage-provisioner [4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6] ...
	I0717 22:55:59.280767   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6"
	I0717 22:55:59.318213   54573 logs.go:123] Gathering logs for CRI-O ...
	I0717 22:55:59.318237   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 22:55:59.872973   54573 logs.go:123] Gathering logs for container status ...
	I0717 22:55:59.873017   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 22:55:59.911891   54573 logs.go:123] Gathering logs for kubelet ...
	I0717 22:55:59.911918   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 22:55:59.976450   54573 logs.go:123] Gathering logs for describe nodes ...
	I0717 22:55:59.976483   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 22:56:00.099556   54573 logs.go:123] Gathering logs for etcd [98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea] ...
	I0717 22:56:00.099592   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea"
	I0717 22:56:00.145447   54573 logs.go:123] Gathering logs for coredns [acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266] ...
	I0717 22:56:00.145479   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266"
	I0717 22:56:00.181246   54573 logs.go:123] Gathering logs for kube-proxy [9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567] ...
	I0717 22:56:00.181277   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567"
	I0717 22:56:00.221127   54573 logs.go:123] Gathering logs for storage-provisioner [a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b] ...
	I0717 22:56:00.221150   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b"
	I0717 22:56:02.761729   54573 system_pods.go:59] 8 kube-system pods found
	I0717 22:56:02.761758   54573 system_pods.go:61] "coredns-5d78c9869d-2mpst" [7516b57f-a4cb-4e2f-995e-8e063bed22ae] Running
	I0717 22:56:02.761765   54573 system_pods.go:61] "etcd-no-preload-935524" [b663c4f9-d98e-457d-b511-435bea5e9525] Running
	I0717 22:56:02.761772   54573 system_pods.go:61] "kube-apiserver-no-preload-935524" [fb6f55fc-8705-46aa-a23c-e7870e52e542] Running
	I0717 22:56:02.761778   54573 system_pods.go:61] "kube-controller-manager-no-preload-935524" [37d43d22-e857-4a9b-b0cb-a9fc39931baa] Running
	I0717 22:56:02.761783   54573 system_pods.go:61] "kube-proxy-qhp66" [8bc95955-b7ba-41e3-ac67-604a9695f784] Running
	I0717 22:56:02.761790   54573 system_pods.go:61] "kube-scheduler-no-preload-935524" [86fef4e0-b156-421a-8d53-0d34cae2cdb3] Running
	I0717 22:56:02.761800   54573 system_pods.go:61] "metrics-server-74d5c6b9c-tlbpl" [7c478efe-4435-45dd-a688-745872fc2918] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:56:02.761809   54573 system_pods.go:61] "storage-provisioner" [85812d54-7a57-430b-991e-e301f123a86a] Running
	I0717 22:56:02.761823   54573 system_pods.go:74] duration metric: took 3.952838173s to wait for pod list to return data ...
	I0717 22:56:02.761837   54573 default_sa.go:34] waiting for default service account to be created ...
	I0717 22:56:02.764526   54573 default_sa.go:45] found service account: "default"
	I0717 22:56:02.764547   54573 default_sa.go:55] duration metric: took 2.700233ms for default service account to be created ...
	I0717 22:56:02.764556   54573 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 22:56:02.770288   54573 system_pods.go:86] 8 kube-system pods found
	I0717 22:56:02.770312   54573 system_pods.go:89] "coredns-5d78c9869d-2mpst" [7516b57f-a4cb-4e2f-995e-8e063bed22ae] Running
	I0717 22:56:02.770318   54573 system_pods.go:89] "etcd-no-preload-935524" [b663c4f9-d98e-457d-b511-435bea5e9525] Running
	I0717 22:56:02.770323   54573 system_pods.go:89] "kube-apiserver-no-preload-935524" [fb6f55fc-8705-46aa-a23c-e7870e52e542] Running
	I0717 22:56:02.770327   54573 system_pods.go:89] "kube-controller-manager-no-preload-935524" [37d43d22-e857-4a9b-b0cb-a9fc39931baa] Running
	I0717 22:56:02.770330   54573 system_pods.go:89] "kube-proxy-qhp66" [8bc95955-b7ba-41e3-ac67-604a9695f784] Running
	I0717 22:56:02.770334   54573 system_pods.go:89] "kube-scheduler-no-preload-935524" [86fef4e0-b156-421a-8d53-0d34cae2cdb3] Running
	I0717 22:56:02.770340   54573 system_pods.go:89] "metrics-server-74d5c6b9c-tlbpl" [7c478efe-4435-45dd-a688-745872fc2918] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:56:02.770346   54573 system_pods.go:89] "storage-provisioner" [85812d54-7a57-430b-991e-e301f123a86a] Running
	I0717 22:56:02.770354   54573 system_pods.go:126] duration metric: took 5.793179ms to wait for k8s-apps to be running ...
	I0717 22:56:02.770362   54573 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 22:56:02.770410   54573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:56:02.786132   54573 system_svc.go:56] duration metric: took 15.760975ms WaitForService to wait for kubelet.
	I0717 22:56:02.786161   54573 kubeadm.go:581] duration metric: took 4m24.129949995s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 22:56:02.786182   54573 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:56:02.789957   54573 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:56:02.789978   54573 node_conditions.go:123] node cpu capacity is 2
	I0717 22:56:02.789988   54573 node_conditions.go:105] duration metric: took 3.802348ms to run NodePressure ...
	I0717 22:56:02.789999   54573 start.go:228] waiting for startup goroutines ...
	I0717 22:56:02.790008   54573 start.go:233] waiting for cluster config update ...
	I0717 22:56:02.790021   54573 start.go:242] writing updated cluster config ...
	I0717 22:56:02.790308   54573 ssh_runner.go:195] Run: rm -f paused
	I0717 22:56:02.840154   54573 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 22:56:02.843243   54573 out.go:177] * Done! kubectl is now configured to use "no-preload-935524" cluster and "default" namespace by default
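[editor's note] Before the "Done!" line, the api_server.go:253 entries show minikube probing the apiserver's /healthz endpoint over HTTPS until it answers 200 "ok". A minimal sketch of that probe follows, using the endpoint from the log and skipping TLS verification the way a throwaway test probe might; a production client would trust the cluster CA instead.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz probes the apiserver's /healthz endpoint once and reports
// whether it answered 200 "ok", as logged above.
func checkHealthz(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed certificate in this setup, so the
		// sketch skips verification; real callers should use the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := checkHealthz("https://192.168.39.6:8443/healthz")
	fmt.Println(ok, err)
}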
	I0717 22:55:58.471229   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:00.969263   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:58.177892   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:58.677211   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:59.177916   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:59.678088   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:00.177933   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:00.678096   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:01.177184   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:01.677152   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:02.177561   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:02.677947   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:02.970089   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:05.470783   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:03.177870   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:03.677715   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:04.177238   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:04.677261   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:05.177220   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:05.678164   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:06.177948   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:06.677392   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:07.177167   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:07.678131   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:07.945881   54248 kubeadm.go:1081] duration metric: took 12.930982407s to wait for elevateKubeSystemPrivileges.
	I0717 22:56:07.945928   54248 kubeadm.go:406] StartCluster complete in 5m28.89261834s
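[editor's note] The burst of `kubectl get sa default` calls roughly every half second, ending with the elevateKubeSystemPrivileges duration above, is minikube waiting for the default service account to exist before it grants cluster-admin to kube-system. A hedged client-go sketch of the same wait follows; the 500ms interval matches the cadence visible in the log, while the 2-minute timeout and helper names are assumptions for illustration.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForDefaultSA polls until the "default" ServiceAccount exists, which is
// what the repeated `kubectl get sa default` calls in the log are doing.
func waitForDefaultSA(client kubernetes.Interface, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		_, err := client.CoreV1().ServiceAccounts("default").Get(context.Background(), "default", metav1.GetOptions{})
		if err != nil {
			return false, nil // not created yet; keep polling
		}
		return true, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitForDefaultSA(client, 2*time.Minute))
}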
	I0717 22:56:07.945958   54248 settings.go:142] acquiring lock: {Name:mk9be0d05c943f5010cb1ca9690e7c6a87f18950 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:56:07.946058   54248 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:56:07.948004   54248 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/kubeconfig: {Name:mk1295c692902c2f497638c9fc1fd126fdb1a2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:56:07.948298   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 22:56:07.948538   54248 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 22:56:07.948628   54248 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-571296"
	I0717 22:56:07.948639   54248 config.go:182] Loaded profile config "embed-certs-571296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:56:07.948657   54248 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-571296"
	W0717 22:56:07.948669   54248 addons.go:240] addon storage-provisioner should already be in state true
	I0717 22:56:07.948687   54248 addons.go:69] Setting default-storageclass=true in profile "embed-certs-571296"
	I0717 22:56:07.948708   54248 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-571296"
	I0717 22:56:07.948713   54248 host.go:66] Checking if "embed-certs-571296" exists ...
	I0717 22:56:07.949078   54248 addons.go:69] Setting metrics-server=true in profile "embed-certs-571296"
	I0717 22:56:07.949100   54248 addons.go:231] Setting addon metrics-server=true in "embed-certs-571296"
	I0717 22:56:07.949101   54248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	W0717 22:56:07.949107   54248 addons.go:240] addon metrics-server should already be in state true
	I0717 22:56:07.949126   54248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:07.949148   54248 host.go:66] Checking if "embed-certs-571296" exists ...
	I0717 22:56:07.949361   54248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:07.949390   54248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:07.949481   54248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:07.949508   54248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:07.967136   54248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35403
	I0717 22:56:07.967705   54248 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:07.967874   54248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43925
	I0717 22:56:07.968286   54248 main.go:141] libmachine: Using API Version  1
	I0717 22:56:07.968317   54248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:07.968395   54248 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:07.968741   54248 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:07.969000   54248 main.go:141] libmachine: Using API Version  1
	I0717 22:56:07.969019   54248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:07.969056   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetState
	I0717 22:56:07.969416   54248 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:07.969964   54248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:07.969993   54248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:07.970220   54248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37531
	I0717 22:56:07.970682   54248 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:07.971172   54248 main.go:141] libmachine: Using API Version  1
	I0717 22:56:07.971194   54248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:07.971603   54248 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:07.972617   54248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:07.972655   54248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:07.988352   54248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38131
	I0717 22:56:07.988872   54248 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:07.989481   54248 main.go:141] libmachine: Using API Version  1
	I0717 22:56:07.989507   54248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:07.989913   54248 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:07.990198   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetState
	I0717 22:56:07.992174   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:56:07.992359   54248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34283
	I0717 22:56:07.993818   54248 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 22:56:07.995350   54248 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 22:56:07.995373   54248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 22:56:07.995393   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:56:07.992931   54248 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:07.995909   54248 main.go:141] libmachine: Using API Version  1
	I0717 22:56:07.995933   54248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:07.996276   54248 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:07.996424   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetState
	I0717 22:56:07.998630   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:56:08.000660   54248 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:56:07.999385   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:56:07.999983   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:56:08.002498   54248 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:56:08.002510   54248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 22:56:08.002529   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:56:08.002556   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:56:08.002587   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:56:08.002626   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:56:08.002714   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:56:08.002874   54248 sshutil.go:53] new ssh client: &{IP:192.168.61.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa Username:docker}
	I0717 22:56:08.003290   54248 addons.go:231] Setting addon default-storageclass=true in "embed-certs-571296"
	W0717 22:56:08.003311   54248 addons.go:240] addon default-storageclass should already be in state true
	I0717 22:56:08.003340   54248 host.go:66] Checking if "embed-certs-571296" exists ...
	I0717 22:56:08.003736   54248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:08.003763   54248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:08.005771   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:56:08.006163   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:56:08.006194   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:56:08.006393   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:56:08.006560   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:56:08.006744   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:56:08.006890   54248 sshutil.go:53] new ssh client: &{IP:192.168.61.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa Username:docker}
	I0717 22:56:08.025042   54248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42975
	I0717 22:56:08.025743   54248 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:08.026232   54248 main.go:141] libmachine: Using API Version  1
	I0717 22:56:08.026252   54248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:08.026732   54248 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:08.027295   54248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:08.027340   54248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:08.044326   54248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40863
	I0717 22:56:08.044743   54248 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:08.045285   54248 main.go:141] libmachine: Using API Version  1
	I0717 22:56:08.045309   54248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:08.045686   54248 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:08.045900   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetState
	I0717 22:56:08.047695   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:56:08.047962   54248 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 22:56:08.047980   54248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 22:56:08.048000   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:56:08.050685   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:56:08.051084   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:56:08.051115   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:56:08.051376   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:56:08.051561   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:56:08.051762   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:56:08.051880   54248 sshutil.go:53] new ssh client: &{IP:192.168.61.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa Username:docker}
	I0717 22:56:08.221022   54248 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 22:56:08.221057   54248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 22:56:08.262777   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 22:56:08.286077   54248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:56:08.301703   54248 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 22:56:08.301728   54248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 22:56:08.314524   54248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 22:56:08.370967   54248 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 22:56:08.370989   54248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 22:56:08.585011   54248 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-571296" context rescaled to 1 replicas
	I0717 22:56:08.585061   54248 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.179 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 22:56:08.587143   54248 out.go:177] * Verifying Kubernetes components...
	I0717 22:56:08.588842   54248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:56:08.666555   54248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 22:56:10.506154   54248 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.243338067s)
	I0717 22:56:10.506244   54248 start.go:901] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0717 22:56:11.016648   54248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.730514867s)
	I0717 22:56:11.016699   54248 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.427824424s)
	I0717 22:56:11.016659   54248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.702100754s)
	I0717 22:56:11.016728   54248 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:11.016733   54248 node_ready.go:35] waiting up to 6m0s for node "embed-certs-571296" to be "Ready" ...
	I0717 22:56:11.016742   54248 main.go:141] libmachine: (embed-certs-571296) Calling .Close
	I0717 22:56:11.016707   54248 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:11.016862   54248 main.go:141] libmachine: (embed-certs-571296) Calling .Close
	I0717 22:56:11.017139   54248 main.go:141] libmachine: (embed-certs-571296) DBG | Closing plugin on server side
	I0717 22:56:11.017150   54248 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:11.017165   54248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:11.017168   54248 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:11.017175   54248 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:11.017177   54248 main.go:141] libmachine: (embed-certs-571296) DBG | Closing plugin on server side
	I0717 22:56:11.017183   54248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:11.017186   54248 main.go:141] libmachine: (embed-certs-571296) Calling .Close
	I0717 22:56:11.017196   54248 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:11.017242   54248 main.go:141] libmachine: (embed-certs-571296) Calling .Close
	I0717 22:56:11.017409   54248 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:11.017425   54248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:11.017443   54248 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:11.017452   54248 main.go:141] libmachine: (embed-certs-571296) Calling .Close
	I0717 22:56:11.017571   54248 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:11.017600   54248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:11.018689   54248 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:11.018706   54248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:11.018703   54248 main.go:141] libmachine: (embed-certs-571296) DBG | Closing plugin on server side
	I0717 22:56:11.043490   54248 node_ready.go:49] node "embed-certs-571296" has status "Ready":"True"
	I0717 22:56:11.043511   54248 node_ready.go:38] duration metric: took 26.766819ms waiting for node "embed-certs-571296" to be "Ready" ...
	I0717 22:56:11.043518   54248 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:56:11.057095   54248 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-6ljtn" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:11.116641   54248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.450034996s)
	I0717 22:56:11.116706   54248 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:11.116724   54248 main.go:141] libmachine: (embed-certs-571296) Calling .Close
	I0717 22:56:11.117015   54248 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:11.117034   54248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:11.117046   54248 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:11.117058   54248 main.go:141] libmachine: (embed-certs-571296) Calling .Close
	I0717 22:56:11.117341   54248 main.go:141] libmachine: (embed-certs-571296) DBG | Closing plugin on server side
	I0717 22:56:11.117389   54248 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:11.117408   54248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:11.117427   54248 addons.go:467] Verifying addon metrics-server=true in "embed-certs-571296"
	I0717 22:56:11.119741   54248 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 22:56:07.979850   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:10.471118   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:12.472257   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:11.122047   54248 addons.go:502] enable addons completed in 3.173503334s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 22:56:12.605075   54248 pod_ready.go:92] pod "coredns-5d78c9869d-6ljtn" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:12.605111   54248 pod_ready.go:81] duration metric: took 1.547984916s waiting for pod "coredns-5d78c9869d-6ljtn" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:12.605126   54248 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-tq27r" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:12.619682   54248 pod_ready.go:92] pod "coredns-5d78c9869d-tq27r" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:12.619710   54248 pod_ready.go:81] duration metric: took 14.576786ms waiting for pod "coredns-5d78c9869d-tq27r" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:12.619722   54248 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:12.628850   54248 pod_ready.go:92] pod "etcd-embed-certs-571296" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:12.628878   54248 pod_ready.go:81] duration metric: took 9.147093ms waiting for pod "etcd-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:12.628889   54248 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:12.641360   54248 pod_ready.go:92] pod "kube-apiserver-embed-certs-571296" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:12.641381   54248 pod_ready.go:81] duration metric: took 12.485183ms waiting for pod "kube-apiserver-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:12.641391   54248 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:12.656634   54248 pod_ready.go:92] pod "kube-controller-manager-embed-certs-571296" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:12.656663   54248 pod_ready.go:81] duration metric: took 15.264878ms waiting for pod "kube-controller-manager-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:12.656677   54248 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xjpds" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:14.480168   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:16.969340   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:13.530098   54248 pod_ready.go:92] pod "kube-proxy-xjpds" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:13.530129   54248 pod_ready.go:81] duration metric: took 873.444575ms waiting for pod "kube-proxy-xjpds" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:13.530144   54248 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:13.821592   54248 pod_ready.go:92] pod "kube-scheduler-embed-certs-571296" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:13.821615   54248 pod_ready.go:81] duration metric: took 291.46393ms waiting for pod "kube-scheduler-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:13.821625   54248 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:16.228210   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:19.470498   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:21.969531   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:18.228289   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:20.228420   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:22.228472   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:26.250616   54649 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.023698231s)
	I0717 22:56:26.250690   54649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:56:26.264095   54649 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 22:56:26.274295   54649 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 22:56:26.284265   54649 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:56:26.284332   54649 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 22:56:26.341601   54649 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0717 22:56:26.341719   54649 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 22:56:26.507992   54649 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 22:56:26.508194   54649 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 22:56:26.508344   54649 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 22:56:26.684682   54649 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 22:56:26.686603   54649 out.go:204]   - Generating certificates and keys ...
	I0717 22:56:26.686753   54649 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 22:56:26.686833   54649 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 22:56:26.686963   54649 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 22:56:26.687386   54649 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0717 22:56:26.687802   54649 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 22:56:26.688484   54649 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0717 22:56:26.689007   54649 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0717 22:56:26.689618   54649 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0717 22:56:26.690234   54649 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 22:56:26.690845   54649 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 22:56:26.691391   54649 kubeadm.go:322] [certs] Using the existing "sa" key
	I0717 22:56:26.691484   54649 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 22:56:26.793074   54649 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 22:56:26.956354   54649 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 22:56:27.033560   54649 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 22:56:27.222598   54649 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 22:56:27.242695   54649 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 22:56:27.243923   54649 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 22:56:27.244009   54649 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 22:56:27.382359   54649 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 22:56:27.385299   54649 out.go:204]   - Booting up control plane ...
	I0717 22:56:27.385459   54649 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 22:56:27.385595   54649 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 22:56:27.385699   54649 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 22:56:27.386230   54649 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 22:56:27.388402   54649 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 22:56:24.469634   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:26.470480   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:24.231654   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:26.728390   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:28.471360   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:30.493443   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:28.728821   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:30.729474   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:32.731419   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:35.894189   54649 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.505577 seconds
	I0717 22:56:35.894298   54649 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 22:56:35.922569   54649 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 22:56:36.459377   54649 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 22:56:36.459628   54649 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-504828 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 22:56:36.981248   54649 kubeadm.go:322] [bootstrap-token] Using token: aq0fl5.e7xnmbjqmeipfdlw
	I0717 22:56:36.983221   54649 out.go:204]   - Configuring RBAC rules ...
	I0717 22:56:36.983401   54649 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 22:56:37.001576   54649 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 22:56:37.012679   54649 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 22:56:37.018002   54649 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 22:56:37.025356   54649 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 22:56:37.030822   54649 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 22:56:37.049741   54649 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 22:56:37.309822   54649 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 22:56:37.414906   54649 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 22:56:37.414947   54649 kubeadm.go:322] 
	I0717 22:56:37.415023   54649 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 22:56:37.415035   54649 kubeadm.go:322] 
	I0717 22:56:37.415135   54649 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 22:56:37.415145   54649 kubeadm.go:322] 
	I0717 22:56:37.415190   54649 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 22:56:37.415290   54649 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 22:56:37.415373   54649 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 22:56:37.415383   54649 kubeadm.go:322] 
	I0717 22:56:37.415495   54649 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0717 22:56:37.415529   54649 kubeadm.go:322] 
	I0717 22:56:37.415593   54649 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 22:56:37.415602   54649 kubeadm.go:322] 
	I0717 22:56:37.415677   54649 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 22:56:37.415755   54649 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 22:56:37.415892   54649 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 22:56:37.415904   54649 kubeadm.go:322] 
	I0717 22:56:37.416034   54649 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 22:56:37.416151   54649 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 22:56:37.416172   54649 kubeadm.go:322] 
	I0717 22:56:37.416306   54649 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token aq0fl5.e7xnmbjqmeipfdlw \
	I0717 22:56:37.416451   54649 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb \
	I0717 22:56:37.416478   54649 kubeadm.go:322] 	--control-plane 
	I0717 22:56:37.416487   54649 kubeadm.go:322] 
	I0717 22:56:37.416596   54649 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 22:56:37.416606   54649 kubeadm.go:322] 
	I0717 22:56:37.416708   54649 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token aq0fl5.e7xnmbjqmeipfdlw \
	I0717 22:56:37.416850   54649 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb 
	I0717 22:56:37.417385   54649 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 22:56:37.417413   54649 cni.go:84] Creating CNI manager for ""
	I0717 22:56:37.417426   54649 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:56:37.419367   54649 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 22:56:37.421047   54649 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 22:56:37.456430   54649 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 22:56:37.520764   54649 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 22:56:37.520861   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:37.520877   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.31.0 minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8 minikube.k8s.io/name=default-k8s-diff-port-504828 minikube.k8s.io/updated_at=2023_07_17T22_56_37_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:32.970043   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:35.469085   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:35.257714   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:37.730437   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:37.914888   54649 ops.go:34] apiserver oom_adj: -16
	I0717 22:56:37.914920   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:38.508471   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:39.008147   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:39.508371   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:40.008059   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:40.508319   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:41.008945   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:41.507958   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:42.008509   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:42.508920   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:37.969711   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:39.970230   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:42.468790   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:40.227771   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:42.228268   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:43.008542   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:43.508809   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:44.008922   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:44.508771   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:45.008681   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:45.507925   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:46.008078   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:46.508950   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:47.008902   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:47.508705   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:44.470199   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:46.969467   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:44.728843   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:46.729321   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:48.008736   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:48.508008   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:49.008524   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:49.508783   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:50.008620   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:50.508131   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:50.675484   54649 kubeadm.go:1081] duration metric: took 13.154682677s to wait for elevateKubeSystemPrivileges.
	I0717 22:56:50.675522   54649 kubeadm.go:406] StartCluster complete in 5m29.688096626s
	I0717 22:56:50.675542   54649 settings.go:142] acquiring lock: {Name:mk9be0d05c943f5010cb1ca9690e7c6a87f18950 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:56:50.675625   54649 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:56:50.678070   54649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/kubeconfig: {Name:mk1295c692902c2f497638c9fc1fd126fdb1a2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:56:50.678358   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 22:56:50.678397   54649 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 22:56:50.678485   54649 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-504828"
	I0717 22:56:50.678504   54649 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-504828"
	I0717 22:56:50.678504   54649 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-504828"
	W0717 22:56:50.678515   54649 addons.go:240] addon storage-provisioner should already be in state true
	I0717 22:56:50.678526   54649 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-504828"
	I0717 22:56:50.678537   54649 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-504828"
	I0717 22:56:50.678557   54649 host.go:66] Checking if "default-k8s-diff-port-504828" exists ...
	I0717 22:56:50.678561   54649 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-504828"
	W0717 22:56:50.678571   54649 addons.go:240] addon metrics-server should already be in state true
	I0717 22:56:50.678630   54649 host.go:66] Checking if "default-k8s-diff-port-504828" exists ...
	I0717 22:56:50.678570   54649 config.go:182] Loaded profile config "default-k8s-diff-port-504828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:56:50.678961   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:50.678995   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:50.679011   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:50.679039   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:50.678962   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:50.679094   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:50.696229   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43571
	I0717 22:56:50.696669   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:50.697375   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:56:50.697414   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:50.697831   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:50.698436   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:50.698474   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:50.698998   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46873
	I0717 22:56:50.699168   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41135
	I0717 22:56:50.699382   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:50.699530   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:50.699812   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:56:50.699824   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:50.700021   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:56:50.700044   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:50.700219   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:50.700385   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:50.700570   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetState
	I0717 22:56:50.700748   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:50.700785   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:50.715085   54649 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-504828"
	W0717 22:56:50.715119   54649 addons.go:240] addon default-storageclass should already be in state true
	I0717 22:56:50.715149   54649 host.go:66] Checking if "default-k8s-diff-port-504828" exists ...
	I0717 22:56:50.715547   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:50.715580   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:50.715831   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41743
	I0717 22:56:50.716347   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:50.716905   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:56:50.716921   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:50.717285   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:50.717334   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41035
	I0717 22:56:50.717493   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetState
	I0717 22:56:50.717699   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:50.718238   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:56:50.718257   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:50.718580   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:50.718843   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetState
	I0717 22:56:50.719486   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:56:50.721699   54649 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 22:56:50.723464   54649 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 22:56:50.723484   54649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 22:56:50.720832   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:56:50.723509   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:56:50.725600   54649 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:56:50.728061   54649 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:56:50.726758   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:56:50.727455   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:56:50.728105   54649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 22:56:50.728133   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:56:50.728134   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:56:50.728166   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:56:50.728380   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:56:50.728785   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:56:50.728938   54649 sshutil.go:53] new ssh client: &{IP:192.168.72.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa Username:docker}
	I0717 22:56:50.731891   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:56:50.732348   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:56:50.732379   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:56:50.732589   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:56:50.732793   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:56:50.732974   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:56:50.733113   54649 sshutil.go:53] new ssh client: &{IP:192.168.72.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa Username:docker}
	I0717 22:56:50.741098   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34891
	I0717 22:56:50.741744   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:50.742386   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:56:50.742410   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:50.742968   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:50.743444   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:50.743490   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:50.759985   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38185
	I0717 22:56:50.760547   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:50.761145   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:56:50.761171   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:50.761598   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:50.761779   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetState
	I0717 22:56:50.763276   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:56:50.763545   54649 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 22:56:50.763559   54649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 22:56:50.763574   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:56:50.766525   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:56:50.766964   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:56:50.766995   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:56:50.767254   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:56:50.767444   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:56:50.767636   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:56:50.767803   54649 sshutil.go:53] new ssh client: &{IP:192.168.72.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa Username:docker}
	I0717 22:56:50.963671   54649 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 22:56:50.963698   54649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 22:56:50.982828   54649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 22:56:50.985884   54649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:56:50.989077   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 22:56:51.020140   54649 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 22:56:51.020174   54649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 22:56:51.094548   54649 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 22:56:51.094574   54649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 22:56:51.185896   54649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 22:56:51.238666   54649 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-504828" context rescaled to 1 replicas
	I0717 22:56:51.238704   54649 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.118 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 22:56:51.241792   54649 out.go:177] * Verifying Kubernetes components...
	I0717 22:56:51.243720   54649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:56:49.470925   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:51.970366   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:48.732421   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:50.742608   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:52.980991   54649 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.998121603s)
	I0717 22:56:52.981060   54649 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:52.981078   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .Close
	I0717 22:56:52.981422   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | Closing plugin on server side
	I0717 22:56:52.981424   54649 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:52.981460   54649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:52.981472   54649 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:52.981486   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .Close
	I0717 22:56:52.981815   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | Closing plugin on server side
	I0717 22:56:52.981906   54649 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:52.981923   54649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:52.981962   54649 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:52.981979   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .Close
	I0717 22:56:52.982328   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | Closing plugin on server side
	I0717 22:56:52.982335   54649 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:52.982352   54649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:53.384207   54649 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.398283926s)
	I0717 22:56:53.384259   54649 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:53.384263   54649 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.39515958s)
	I0717 22:56:53.384272   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .Close
	I0717 22:56:53.384280   54649 start.go:901] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0717 22:56:53.384588   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | Closing plugin on server side
	I0717 22:56:53.384664   54649 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:53.384680   54649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:53.384694   54649 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:53.384711   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .Close
	I0717 22:56:53.385419   54649 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:53.385438   54649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:53.385446   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | Closing plugin on server side
	I0717 22:56:53.810615   54649 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.624668019s)
	I0717 22:56:53.810613   54649 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.5668435s)
	I0717 22:56:53.810690   54649 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:53.810712   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .Close
	I0717 22:56:53.810717   54649 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-504828" to be "Ready" ...
	I0717 22:56:53.811092   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | Closing plugin on server side
	I0717 22:56:53.811172   54649 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:53.811191   54649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:53.811209   54649 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:53.811223   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .Close
	I0717 22:56:53.811501   54649 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:53.811519   54649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:53.811529   54649 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-504828"
	I0717 22:56:53.813588   54649 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0717 22:56:53.815209   54649 addons.go:502] enable addons completed in 3.136812371s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0717 22:56:53.848049   54649 node_ready.go:49] node "default-k8s-diff-port-504828" has status "Ready":"True"
	I0717 22:56:53.848070   54649 node_ready.go:38] duration metric: took 37.336626ms waiting for node "default-k8s-diff-port-504828" to be "Ready" ...
	I0717 22:56:53.848078   54649 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:56:53.869392   54649 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-rqcjj" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.922409   54649 pod_ready.go:92] pod "coredns-5d78c9869d-rqcjj" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:55.922433   54649 pod_ready.go:81] duration metric: took 2.05301467s waiting for pod "coredns-5d78c9869d-rqcjj" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.922442   54649 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-xz4mj" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.930140   54649 pod_ready.go:92] pod "coredns-5d78c9869d-xz4mj" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:55.930162   54649 pod_ready.go:81] duration metric: took 7.714745ms waiting for pod "coredns-5d78c9869d-xz4mj" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.930171   54649 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.938968   54649 pod_ready.go:92] pod "etcd-default-k8s-diff-port-504828" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:55.938994   54649 pod_ready.go:81] duration metric: took 8.813777ms waiting for pod "etcd-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.939006   54649 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.950100   54649 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-504828" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:55.950127   54649 pod_ready.go:81] duration metric: took 11.110719ms waiting for pod "kube-apiserver-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.950141   54649 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.956205   54649 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-504828" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:55.956228   54649 pod_ready.go:81] duration metric: took 6.078268ms waiting for pod "kube-controller-manager-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.956240   54649 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nmtc8" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:56.318975   54649 pod_ready.go:92] pod "kube-proxy-nmtc8" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:56.319002   54649 pod_ready.go:81] duration metric: took 362.754902ms waiting for pod "kube-proxy-nmtc8" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:56.319012   54649 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:56.725010   54649 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-504828" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:56.725042   54649 pod_ready.go:81] duration metric: took 406.022192ms waiting for pod "kube-scheduler-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:56.725059   54649 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace to be "Ready" ...
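	The pod_ready.go entries above poll each system-critical pod until its Ready condition reports True, or until the per-pod timeout expires (as happens later for the metrics-server pods). Below is a minimal sketch of such a polling loop that shells out to kubectl the way the logged commands do; the 2-second interval and the jsonpath query are illustrative assumptions, not minikube's actual implementation.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPodReady polls the pod's Ready condition until it is "True" or the
// timeout expires, using the kubeconfig path shown in the log.
func waitPodReady(name, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl",
			"--kubeconfig", "/var/lib/minikube/kubeconfig",
			"-n", namespace, "get", "pod", name,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %q in %q not Ready after %s", name, namespace, timeout)
}

func main() {
	// Same pod and timeout as the wait that starts in the log line above.
	if err := waitPodReady("metrics-server-74d5c6b9c-j8f2f", "kube-system", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```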
	I0717 22:56:53.971176   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:56.468730   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:57.063020   53870 pod_ready.go:81] duration metric: took 4m0.001070587s waiting for pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace to be "Ready" ...
	E0717 22:56:57.063061   53870 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 22:56:57.063088   53870 pod_ready.go:38] duration metric: took 4m1.198793286s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:56:57.063114   53870 kubeadm.go:640] restartCluster took 5m14.33125167s
	W0717 22:56:57.063164   53870 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0717 22:56:57.063188   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 22:56:53.230170   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:55.230713   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:57.729746   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:59.128445   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:01.628013   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:59.730555   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:02.228533   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:03.628469   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:06.127096   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:04.228878   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:06.229004   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:08.128257   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:10.128530   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:12.128706   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:10.086799   53870 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.023585108s)
	I0717 22:57:10.086877   53870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:57:10.102476   53870 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 22:57:10.112904   53870 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 22:57:10.123424   53870 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
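	The exit-status-2 `ls` above is how the restart path decides whether there is a stale kubeadm config to clean up: with none of the generated kubeconfigs present, cleanup is skipped and the cluster is re-initialized with the `kubeadm init` command that follows. A rough local equivalent of that check is sketched below; the real check runs `ls -la` over SSH, and os.Stat is used here only to keep the example self-contained.

```go
package main

import (
	"fmt"
	"os"
)

// kubeadmConfigsPresent mirrors the config check above: all four generated
// kubeconfigs must exist for the stale-config cleanup to run.
func kubeadmConfigsPresent() bool {
	for _, p := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if _, err := os.Stat(p); err != nil {
			return false // any missing file means there is nothing to clean up
		}
	}
	return true
}

func main() {
	if !kubeadmConfigsPresent() {
		fmt.Println("config check failed, skipping stale config cleanup")
	}
}
```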
	I0717 22:57:10.123471   53870 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0717 22:57:10.352747   53870 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 22:57:08.232655   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:10.730595   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:14.129308   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:16.627288   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:13.230023   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:15.730720   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:18.628332   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:20.629305   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:18.227910   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:20.228411   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:22.230069   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:23.708206   53870 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0717 22:57:23.708283   53870 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 22:57:23.708382   53870 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 22:57:23.708529   53870 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 22:57:23.708651   53870 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 22:57:23.708789   53870 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 22:57:23.708916   53870 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 22:57:23.708988   53870 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0717 22:57:23.709078   53870 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 22:57:23.710652   53870 out.go:204]   - Generating certificates and keys ...
	I0717 22:57:23.710759   53870 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 22:57:23.710840   53870 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 22:57:23.710959   53870 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 22:57:23.711058   53870 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0717 22:57:23.711156   53870 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 22:57:23.711234   53870 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0717 22:57:23.711314   53870 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0717 22:57:23.711415   53870 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0717 22:57:23.711522   53870 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 22:57:23.711635   53870 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 22:57:23.711697   53870 kubeadm.go:322] [certs] Using the existing "sa" key
	I0717 22:57:23.711776   53870 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 22:57:23.711831   53870 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 22:57:23.711892   53870 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 22:57:23.711978   53870 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 22:57:23.712048   53870 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 22:57:23.712136   53870 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 22:57:23.713799   53870 out.go:204]   - Booting up control plane ...
	I0717 22:57:23.713909   53870 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 22:57:23.714033   53870 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 22:57:23.714145   53870 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 22:57:23.714268   53870 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 22:57:23.714418   53870 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 22:57:23.714483   53870 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.004162 seconds
	I0717 22:57:23.714656   53870 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 22:57:23.714846   53870 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 22:57:23.714929   53870 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 22:57:23.715088   53870 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-332820 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0717 22:57:23.715170   53870 kubeadm.go:322] [bootstrap-token] Using token: sjemvm.5nuhmbx5uh7jm9fo
	I0717 22:57:23.716846   53870 out.go:204]   - Configuring RBAC rules ...
	I0717 22:57:23.716937   53870 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 22:57:23.717067   53870 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 22:57:23.717210   53870 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 22:57:23.717333   53870 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 22:57:23.717414   53870 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 22:57:23.717456   53870 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 22:57:23.717494   53870 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 22:57:23.717501   53870 kubeadm.go:322] 
	I0717 22:57:23.717564   53870 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 22:57:23.717571   53870 kubeadm.go:322] 
	I0717 22:57:23.717636   53870 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 22:57:23.717641   53870 kubeadm.go:322] 
	I0717 22:57:23.717662   53870 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 22:57:23.717733   53870 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 22:57:23.717783   53870 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 22:57:23.717791   53870 kubeadm.go:322] 
	I0717 22:57:23.717839   53870 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 22:57:23.717946   53870 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 22:57:23.718040   53870 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 22:57:23.718052   53870 kubeadm.go:322] 
	I0717 22:57:23.718172   53870 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0717 22:57:23.718289   53870 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 22:57:23.718299   53870 kubeadm.go:322] 
	I0717 22:57:23.718373   53870 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token sjemvm.5nuhmbx5uh7jm9fo \
	I0717 22:57:23.718476   53870 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb \
	I0717 22:57:23.718525   53870 kubeadm.go:322]     --control-plane 	  
	I0717 22:57:23.718539   53870 kubeadm.go:322] 
	I0717 22:57:23.718624   53870 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 22:57:23.718631   53870 kubeadm.go:322] 
	I0717 22:57:23.718703   53870 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token sjemvm.5nuhmbx5uh7jm9fo \
	I0717 22:57:23.718812   53870 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb 
	I0717 22:57:23.718825   53870 cni.go:84] Creating CNI manager for ""
	I0717 22:57:23.718834   53870 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:57:23.720891   53870 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 22:57:23.128941   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:25.129405   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:27.129595   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:23.722935   53870 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 22:57:23.738547   53870 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
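	The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration that the "kvm2 driver + crio runtime" combination selects above. The sketch below emits a conflist of the same general shape; the cniVersion, subnet, and plugin options are illustrative assumptions, not the exact file from this run.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Emit a bridge CNI conflist of the same general shape as the file written to
// /etc/cni/net.d/1-k8s.conflist. Field values are assumptions for demonstration.
func main() {
	conf := map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]interface{}{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	out, _ := json.MarshalIndent(conf, "", "  ")
	fmt.Println(string(out))
}
```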
	I0717 22:57:23.764002   53870 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 22:57:23.764109   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.0 minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8 minikube.k8s.io/name=old-k8s-version-332820 minikube.k8s.io/updated_at=2023_07_17T22_57_23_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:23.764127   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:23.835900   53870 ops.go:34] apiserver oom_adj: -16
	I0717 22:57:24.015975   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:24.622866   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:25.122754   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:25.622733   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:26.123442   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:26.623190   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:27.123191   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:27.622408   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:24.729678   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:26.730278   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:29.629588   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:32.130357   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:28.122555   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:28.622771   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:29.122717   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:29.622760   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:30.123186   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:30.622731   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:31.122724   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:31.622957   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:32.122775   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:32.622552   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:29.228462   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:31.232382   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:34.629160   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:37.128209   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:33.122703   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:33.623262   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:34.122574   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:34.623130   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:35.122819   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:35.622426   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:36.123262   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:36.622474   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:37.122820   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:37.623414   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:33.244514   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:35.735391   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:38.123076   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:38.622497   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:39.122826   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:39.220042   53870 kubeadm.go:1081] duration metric: took 15.45599881s to wait for elevateKubeSystemPrivileges.
	I0717 22:57:39.220076   53870 kubeadm.go:406] StartCluster complete in 5m56.5464295s
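	The run of repeated `kubectl get sa default` commands above is the elevateKubeSystemPrivileges wait: after `kubeadm init`, the default service account only appears once the controller-manager is up, and the minikube-rbac cluster role binding and addon manifests are only useful after that point. A condensed sketch of that retry loop follows; the binary and kubeconfig paths are the ones from the log, while the 500 ms interval and 2-minute cap are assumptions.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitDefaultServiceAccount retries `kubectl get sa default` until it succeeds,
// as the repeated commands above do.
func waitDefaultServiceAccount(timeout time.Duration) error {
	kubectl := "/var/lib/minikube/binaries/v1.16.0/kubectl"
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if cmd.Run() == nil {
			return nil // the default service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account did not appear within %s", timeout)
}

func main() {
	if err := waitDefaultServiceAccount(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
```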
	I0717 22:57:39.220095   53870 settings.go:142] acquiring lock: {Name:mk9be0d05c943f5010cb1ca9690e7c6a87f18950 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:57:39.220173   53870 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:57:39.221940   53870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/kubeconfig: {Name:mk1295c692902c2f497638c9fc1fd126fdb1a2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:57:39.222201   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 22:57:39.222371   53870 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 22:57:39.222458   53870 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-332820"
	I0717 22:57:39.222474   53870 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-332820"
	W0717 22:57:39.222486   53870 addons.go:240] addon storage-provisioner should already be in state true
	I0717 22:57:39.222517   53870 config.go:182] Loaded profile config "old-k8s-version-332820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0717 22:57:39.222533   53870 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-332820"
	I0717 22:57:39.222544   53870 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-332820"
	I0717 22:57:39.222528   53870 host.go:66] Checking if "old-k8s-version-332820" exists ...
	I0717 22:57:39.222906   53870 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-332820"
	I0717 22:57:39.222947   53870 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-332820"
	I0717 22:57:39.222955   53870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:57:39.222965   53870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:57:39.222978   53870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:57:39.222989   53870 main.go:141] libmachine: Launching plugin server for driver kvm2
	W0717 22:57:39.222958   53870 addons.go:240] addon metrics-server should already be in state true
	I0717 22:57:39.223266   53870 host.go:66] Checking if "old-k8s-version-332820" exists ...
	I0717 22:57:39.223611   53870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:57:39.223644   53870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:57:39.241834   53870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36257
	I0717 22:57:39.242161   53870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36597
	I0717 22:57:39.242290   53870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41325
	I0717 22:57:39.242409   53870 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:57:39.242525   53870 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:57:39.242699   53870 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:57:39.242983   53870 main.go:141] libmachine: Using API Version  1
	I0717 22:57:39.242995   53870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:57:39.243079   53870 main.go:141] libmachine: Using API Version  1
	I0717 22:57:39.243085   53870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:57:39.243146   53870 main.go:141] libmachine: Using API Version  1
	I0717 22:57:39.243152   53870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:57:39.243455   53870 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:57:39.243499   53870 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:57:39.243923   53870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:57:39.243955   53870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:57:39.244114   53870 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:57:39.244145   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetState
	I0717 22:57:39.244609   53870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:57:39.244636   53870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:57:39.264113   53870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38423
	I0717 22:57:39.264664   53870 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:57:39.265196   53870 main.go:141] libmachine: Using API Version  1
	I0717 22:57:39.265217   53870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:57:39.265738   53870 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:57:39.265990   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetState
	I0717 22:57:39.267754   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:57:39.269600   53870 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:57:39.269649   53870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37175
	I0717 22:57:39.271155   53870 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:57:39.271170   53870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 22:57:39.271196   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:57:39.271008   53870 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-332820"
	W0717 22:57:39.271246   53870 addons.go:240] addon default-storageclass should already be in state true
	I0717 22:57:39.271278   53870 host.go:66] Checking if "old-k8s-version-332820" exists ...
	I0717 22:57:39.271539   53870 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:57:39.271564   53870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:57:39.271582   53870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:57:39.272088   53870 main.go:141] libmachine: Using API Version  1
	I0717 22:57:39.272112   53870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:57:39.272450   53870 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:57:39.272628   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetState
	I0717 22:57:39.275001   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:57:39.276178   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:57:39.276580   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:57:39.276603   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:57:39.276866   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:57:39.277046   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:57:39.277173   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:57:39.277284   53870 sshutil.go:53] new ssh client: &{IP:192.168.50.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/old-k8s-version-332820/id_rsa Username:docker}
	I0717 22:57:39.279594   53870 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 22:57:39.281161   53870 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 22:57:39.281178   53870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 22:57:39.281197   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:57:39.284664   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:57:39.285093   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:57:39.285126   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:57:39.285323   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:57:39.285486   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:57:39.285624   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:57:39.285731   53870 sshutil.go:53] new ssh client: &{IP:192.168.50.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/old-k8s-version-332820/id_rsa Username:docker}
	I0717 22:57:39.291470   53870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44127
	I0717 22:57:39.291955   53870 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:57:39.292486   53870 main.go:141] libmachine: Using API Version  1
	I0717 22:57:39.292509   53870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:57:39.292896   53870 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:57:39.293409   53870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:57:39.293446   53870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:57:39.310134   53870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35331
	I0717 22:57:39.310626   53870 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:57:39.311202   53870 main.go:141] libmachine: Using API Version  1
	I0717 22:57:39.311227   53870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:57:39.311758   53870 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:57:39.311947   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetState
	I0717 22:57:39.314218   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:57:39.314495   53870 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 22:57:39.314506   53870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 22:57:39.314520   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:57:39.317813   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:57:39.321612   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:57:39.321659   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:57:39.321685   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:57:39.321771   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:57:39.321872   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:57:39.321963   53870 sshutil.go:53] new ssh client: &{IP:192.168.50.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/old-k8s-version-332820/id_rsa Username:docker}
	I0717 22:57:39.410805   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 22:57:39.448115   53870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:57:39.468015   53870 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 22:57:39.468044   53870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 22:57:39.510209   53870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 22:57:39.542977   53870 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 22:57:39.543006   53870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 22:57:39.621799   53870 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 22:57:39.621830   53870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 22:57:39.695813   53870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 22:57:39.820255   53870 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-332820" context rescaled to 1 replicas
	I0717 22:57:39.820293   53870 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.149 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 22:57:39.822441   53870 out.go:177] * Verifying Kubernetes components...
	I0717 22:57:39.824136   53870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:57:40.366843   53870 start.go:901] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0717 22:57:40.692359   53870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.244194312s)
	I0717 22:57:40.692412   53870 main.go:141] libmachine: Making call to close driver server
	I0717 22:57:40.692417   53870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.18217225s)
	I0717 22:57:40.692451   53870 main.go:141] libmachine: Making call to close driver server
	I0717 22:57:40.692463   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .Close
	I0717 22:57:40.692427   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .Close
	I0717 22:57:40.692926   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | Closing plugin on server side
	I0717 22:57:40.692941   53870 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:57:40.692955   53870 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:57:40.692961   53870 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:57:40.692966   53870 main.go:141] libmachine: Making call to close driver server
	I0717 22:57:40.692971   53870 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:57:40.692977   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .Close
	I0717 22:57:40.692982   53870 main.go:141] libmachine: Making call to close driver server
	I0717 22:57:40.692993   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .Close
	I0717 22:57:40.693346   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | Closing plugin on server side
	I0717 22:57:40.693347   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | Closing plugin on server side
	I0717 22:57:40.693360   53870 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:57:40.693377   53870 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:57:40.693379   53870 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:57:40.693390   53870 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:57:40.693391   53870 main.go:141] libmachine: Making call to close driver server
	I0717 22:57:40.693402   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .Close
	I0717 22:57:40.693727   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | Closing plugin on server side
	I0717 22:57:40.695361   53870 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:57:40.695382   53870 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:57:41.360399   53870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.664534201s)
	I0717 22:57:41.360444   53870 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.536280934s)
	I0717 22:57:41.360477   53870 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-332820" to be "Ready" ...
	I0717 22:57:41.360484   53870 main.go:141] libmachine: Making call to close driver server
	I0717 22:57:41.360603   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .Close
	I0717 22:57:41.360912   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | Closing plugin on server side
	I0717 22:57:41.360959   53870 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:57:41.360976   53870 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:57:41.360986   53870 main.go:141] libmachine: Making call to close driver server
	I0717 22:57:41.361000   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .Close
	I0717 22:57:41.361267   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | Closing plugin on server side
	I0717 22:57:41.361323   53870 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:57:41.361335   53870 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:57:41.361350   53870 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-332820"
	I0717 22:57:41.364209   53870 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 22:57:39.128970   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:41.129335   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:41.365698   53870 addons.go:502] enable addons completed in 2.143322329s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 22:57:41.370307   53870 node_ready.go:49] node "old-k8s-version-332820" has status "Ready":"True"
	I0717 22:57:41.370334   53870 node_ready.go:38] duration metric: took 9.838563ms waiting for node "old-k8s-version-332820" to be "Ready" ...
	I0717 22:57:41.370345   53870 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:57:41.477919   53870 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-pjn9n" in "kube-system" namespace to be "Ready" ...
	I0717 22:57:38.229186   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:40.229347   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:42.730552   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:43.627986   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:46.126930   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:43.515865   53870 pod_ready.go:102] pod "coredns-5644d7b6d9-pjn9n" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:44.011451   53870 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-pjn9n" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-pjn9n" not found
	I0717 22:57:44.011475   53870 pod_ready.go:81] duration metric: took 2.533523466s waiting for pod "coredns-5644d7b6d9-pjn9n" in "kube-system" namespace to be "Ready" ...
	E0717 22:57:44.011483   53870 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-pjn9n" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-pjn9n" not found
	I0717 22:57:44.011490   53870 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-t4d2t" in "kube-system" namespace to be "Ready" ...
	I0717 22:57:46.023775   53870 pod_ready.go:102] pod "coredns-5644d7b6d9-t4d2t" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:45.229105   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:47.727715   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:48.128141   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:50.628216   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:48.523241   53870 pod_ready.go:102] pod "coredns-5644d7b6d9-t4d2t" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:50.024098   53870 pod_ready.go:92] pod "coredns-5644d7b6d9-t4d2t" in "kube-system" namespace has status "Ready":"True"
	I0717 22:57:50.024118   53870 pod_ready.go:81] duration metric: took 6.012622912s waiting for pod "coredns-5644d7b6d9-t4d2t" in "kube-system" namespace to be "Ready" ...
	I0717 22:57:50.024129   53870 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dpnlw" in "kube-system" namespace to be "Ready" ...
	I0717 22:57:50.029960   53870 pod_ready.go:92] pod "kube-proxy-dpnlw" in "kube-system" namespace has status "Ready":"True"
	I0717 22:57:50.029976   53870 pod_ready.go:81] duration metric: took 5.842404ms waiting for pod "kube-proxy-dpnlw" in "kube-system" namespace to be "Ready" ...
	I0717 22:57:50.029985   53870 pod_ready.go:38] duration metric: took 8.659630061s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:57:50.029998   53870 api_server.go:52] waiting for apiserver process to appear ...
	I0717 22:57:50.030036   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:57:50.046609   53870 api_server.go:72] duration metric: took 10.226287152s to wait for apiserver process to appear ...
	I0717 22:57:50.046634   53870 api_server.go:88] waiting for apiserver healthz status ...
	I0717 22:57:50.046654   53870 api_server.go:253] Checking apiserver healthz at https://192.168.50.149:8443/healthz ...
	I0717 22:57:50.053143   53870 api_server.go:279] https://192.168.50.149:8443/healthz returned 200:
	ok
	I0717 22:57:50.054242   53870 api_server.go:141] control plane version: v1.16.0
	I0717 22:57:50.054259   53870 api_server.go:131] duration metric: took 7.618888ms to wait for apiserver health ...
	I0717 22:57:50.054265   53870 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 22:57:50.059517   53870 system_pods.go:59] 4 kube-system pods found
	I0717 22:57:50.059537   53870 system_pods.go:61] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:50.059542   53870 system_pods.go:61] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:50.059550   53870 system_pods.go:61] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:50.059559   53870 system_pods.go:61] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:50.059567   53870 system_pods.go:74] duration metric: took 5.296559ms to wait for pod list to return data ...
	I0717 22:57:50.059575   53870 default_sa.go:34] waiting for default service account to be created ...
	I0717 22:57:50.062619   53870 default_sa.go:45] found service account: "default"
	I0717 22:57:50.062636   53870 default_sa.go:55] duration metric: took 3.055001ms for default service account to be created ...
	I0717 22:57:50.062643   53870 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 22:57:50.066927   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:50.066960   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:50.066969   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:50.066978   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:50.066987   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:50.067003   53870 retry.go:31] will retry after 260.087226ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:50.331854   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:50.331881   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:50.331886   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:50.331893   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:50.331899   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:50.331914   53870 retry.go:31] will retry after 352.733578ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:50.689437   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:50.689470   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:50.689478   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:50.689489   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:50.689497   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:50.689531   53870 retry.go:31] will retry after 448.974584ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:51.144027   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:51.144052   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:51.144057   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:51.144064   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:51.144072   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:51.144084   53870 retry.go:31] will retry after 388.759143ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:51.538649   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:51.538681   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:51.538690   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:51.538701   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:51.538709   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:51.538726   53870 retry.go:31] will retry after 516.772578ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:52.061223   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:52.061251   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:52.061257   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:52.061264   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:52.061270   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:52.061284   53870 retry.go:31] will retry after 640.645684ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:52.706812   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:52.706841   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:52.706848   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:52.706857   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:52.706865   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:52.706881   53870 retry.go:31] will retry after 800.353439ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:49.728135   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:51.729859   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:53.128948   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:55.628153   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:53.512660   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:53.512702   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:53.512710   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:53.512720   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:53.512729   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:53.512746   53870 retry.go:31] will retry after 1.135974065s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:54.653983   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:54.654008   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:54.654013   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:54.654021   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:54.654027   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:54.654040   53870 retry.go:31] will retry after 1.807970353s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:56.466658   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:56.466685   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:56.466690   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:56.466697   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:56.466703   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:56.466717   53870 retry.go:31] will retry after 1.738235237s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:53.729966   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:56.229195   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:58.130852   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:00.627290   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:58.210259   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:58.210286   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:58.210291   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:58.210298   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:58.210304   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:58.210318   53870 retry.go:31] will retry after 2.588058955s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:58:00.805164   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:58:00.805189   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:58:00.805195   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:58:00.805204   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:58:00.805212   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:58:00.805229   53870 retry.go:31] will retry after 2.395095199s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:58.230452   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:00.730302   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:02.627408   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:05.127023   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:03.205614   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:58:03.205641   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:58:03.205646   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:58:03.205654   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:58:03.205661   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:58:03.205673   53870 retry.go:31] will retry after 3.552797061s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:58:06.765112   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:58:06.765169   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:58:06.765189   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:58:06.765202   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:58:06.765211   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:58:06.765229   53870 retry.go:31] will retry after 3.62510644s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:58:03.229254   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:05.729500   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:07.627727   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:10.127545   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:10.396156   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:58:10.396185   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:58:10.396193   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:58:10.396202   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:58:10.396210   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:58:10.396234   53870 retry.go:31] will retry after 7.05504218s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:58:08.230115   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:10.729252   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:12.729814   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:12.627688   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:14.629102   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:17.126975   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:17.458031   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:58:17.458055   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:58:17.458060   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:58:17.458067   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:58:17.458072   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:58:17.458085   53870 retry.go:31] will retry after 7.079137896s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:58:15.228577   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:17.229657   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:19.127827   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:21.627879   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:19.733111   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:22.229170   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:24.128551   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:26.627380   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:24.542750   53870 system_pods.go:86] 5 kube-system pods found
	I0717 22:58:24.542779   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:58:24.542785   53870 system_pods.go:89] "kube-controller-manager-old-k8s-version-332820" [6178a2db-2800-4689-bb29-5fd220cf3560] Running
	I0717 22:58:24.542789   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:58:24.542796   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:58:24.542801   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:58:24.542814   53870 retry.go:31] will retry after 10.245831604s: missing components: etcd, kube-apiserver, kube-scheduler
	I0717 22:58:24.729548   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:27.228785   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:28.627425   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:30.627791   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:29.728922   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:31.729450   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:32.628481   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:35.127509   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:37.128620   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:34.794623   53870 system_pods.go:86] 6 kube-system pods found
	I0717 22:58:34.794652   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:58:34.794658   53870 system_pods.go:89] "kube-apiserver-old-k8s-version-332820" [a92ec810-2496-4702-96cb-b99972aa0907] Running
	I0717 22:58:34.794662   53870 system_pods.go:89] "kube-controller-manager-old-k8s-version-332820" [6178a2db-2800-4689-bb29-5fd220cf3560] Running
	I0717 22:58:34.794666   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:58:34.794673   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:58:34.794678   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:58:34.794692   53870 retry.go:31] will retry after 13.54688256s: missing components: etcd, kube-scheduler
	I0717 22:58:33.732071   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:36.230099   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:39.627130   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:41.628484   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:38.230167   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:40.728553   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:42.730438   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:44.129730   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:46.130222   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:45.228042   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:47.230684   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:48.627207   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:51.127809   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:48.348380   53870 system_pods.go:86] 8 kube-system pods found
	I0717 22:58:48.348409   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:58:48.348415   53870 system_pods.go:89] "etcd-old-k8s-version-332820" [2182326c-a489-44f6-a2bb-4d238d500cd4] Pending
	I0717 22:58:48.348419   53870 system_pods.go:89] "kube-apiserver-old-k8s-version-332820" [a92ec810-2496-4702-96cb-b99972aa0907] Running
	I0717 22:58:48.348424   53870 system_pods.go:89] "kube-controller-manager-old-k8s-version-332820" [6178a2db-2800-4689-bb29-5fd220cf3560] Running
	I0717 22:58:48.348429   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:58:48.348433   53870 system_pods.go:89] "kube-scheduler-old-k8s-version-332820" [6145ebf3-1505-4eee-be83-b473b2d6eb16] Running
	I0717 22:58:48.348440   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:58:48.348448   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:58:48.348460   53870 retry.go:31] will retry after 11.748298579s: missing components: etcd
	I0717 22:58:49.730893   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:51.731624   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:53.131814   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:55.628315   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:54.229398   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:56.232954   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:00.104576   53870 system_pods.go:86] 8 kube-system pods found
	I0717 22:59:00.104603   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:59:00.104609   53870 system_pods.go:89] "etcd-old-k8s-version-332820" [2182326c-a489-44f6-a2bb-4d238d500cd4] Running
	I0717 22:59:00.104613   53870 system_pods.go:89] "kube-apiserver-old-k8s-version-332820" [a92ec810-2496-4702-96cb-b99972aa0907] Running
	I0717 22:59:00.104618   53870 system_pods.go:89] "kube-controller-manager-old-k8s-version-332820" [6178a2db-2800-4689-bb29-5fd220cf3560] Running
	I0717 22:59:00.104622   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:59:00.104626   53870 system_pods.go:89] "kube-scheduler-old-k8s-version-332820" [6145ebf3-1505-4eee-be83-b473b2d6eb16] Running
	I0717 22:59:00.104632   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:59:00.104638   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:59:00.104646   53870 system_pods.go:126] duration metric: took 1m10.041998574s to wait for k8s-apps to be running ...
	I0717 22:59:00.104654   53870 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 22:59:00.104712   53870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:59:00.127311   53870 system_svc.go:56] duration metric: took 22.647393ms WaitForService to wait for kubelet.
	I0717 22:59:00.127340   53870 kubeadm.go:581] duration metric: took 1m20.307024254s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 22:59:00.127365   53870 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:59:00.131417   53870 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:59:00.131440   53870 node_conditions.go:123] node cpu capacity is 2
	I0717 22:59:00.131451   53870 node_conditions.go:105] duration metric: took 4.081643ms to run NodePressure ...
	I0717 22:59:00.131462   53870 start.go:228] waiting for startup goroutines ...
	I0717 22:59:00.131468   53870 start.go:233] waiting for cluster config update ...
	I0717 22:59:00.131478   53870 start.go:242] writing updated cluster config ...
	I0717 22:59:00.131776   53870 ssh_runner.go:195] Run: rm -f paused
	I0717 22:59:00.183048   53870 start.go:578] kubectl: 1.27.3, cluster: 1.16.0 (minor skew: 11)
	I0717 22:59:00.184945   53870 out.go:177] 
	W0717 22:59:00.186221   53870 out.go:239] ! /usr/local/bin/kubectl is version 1.27.3, which may have incompatibilities with Kubernetes 1.16.0.
	I0717 22:59:00.187477   53870 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0717 22:59:00.188679   53870 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-332820" cluster and "default" namespace by default
	I0717 22:58:57.628894   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:59.629684   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:02.128694   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:58.730891   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:00.731091   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:04.627812   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:06.628434   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:03.230847   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:05.728807   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:07.728897   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:08.630065   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:11.128988   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:09.729866   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:12.229160   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:13.627995   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:16.128000   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:14.728745   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:16.733743   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:18.131709   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:20.628704   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:19.234979   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:21.730483   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:22.629821   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:25.127417   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:27.127827   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:24.229123   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:26.728729   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:29.629594   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:32.126711   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:28.729318   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:30.729924   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:32.731713   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:34.627629   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:37.128939   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:35.228008   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:37.233675   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:39.628990   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:41.629614   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:39.729052   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:41.730060   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:44.127514   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:46.128048   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:44.228115   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:46.229857   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:48.128761   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:50.631119   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:48.728917   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:50.730222   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:52.731295   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:53.127276   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:55.127950   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:57.128481   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:55.228655   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:57.228813   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:59.626761   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:01.628045   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:59.229493   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:01.230143   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:04.127371   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:06.128098   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:03.728770   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:06.228708   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:08.128197   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:10.626883   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:08.229060   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:10.727573   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:12.730410   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:12.628273   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:14.629361   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:17.127148   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:13.822400   54248 pod_ready.go:81] duration metric: took 4m0.000761499s waiting for pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace to be "Ready" ...
	E0717 23:00:13.822430   54248 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 23:00:13.822438   54248 pod_ready.go:38] duration metric: took 4m2.778910042s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 23:00:13.822455   54248 api_server.go:52] waiting for apiserver process to appear ...
	I0717 23:00:13.822482   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 23:00:13.822546   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 23:00:13.868846   54248 cri.go:89] found id: "50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c"
	I0717 23:00:13.868873   54248 cri.go:89] found id: ""
	I0717 23:00:13.868884   54248 logs.go:284] 1 containers: [50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c]
	I0717 23:00:13.868951   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:13.873997   54248 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 23:00:13.874077   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 23:00:13.904386   54248 cri.go:89] found id: "e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2"
	I0717 23:00:13.904415   54248 cri.go:89] found id: ""
	I0717 23:00:13.904425   54248 logs.go:284] 1 containers: [e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2]
	I0717 23:00:13.904486   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:13.909075   54248 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 23:00:13.909127   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 23:00:13.940628   54248 cri.go:89] found id: "828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594"
	I0717 23:00:13.940657   54248 cri.go:89] found id: ""
	I0717 23:00:13.940667   54248 logs.go:284] 1 containers: [828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594]
	I0717 23:00:13.940721   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:13.945076   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 23:00:13.945132   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 23:00:13.976589   54248 cri.go:89] found id: "a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e"
	I0717 23:00:13.976612   54248 cri.go:89] found id: ""
	I0717 23:00:13.976621   54248 logs.go:284] 1 containers: [a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e]
	I0717 23:00:13.976684   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:13.981163   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 23:00:13.981231   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 23:00:14.018277   54248 cri.go:89] found id: "5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea"
	I0717 23:00:14.018298   54248 cri.go:89] found id: ""
	I0717 23:00:14.018308   54248 logs.go:284] 1 containers: [5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea]
	I0717 23:00:14.018370   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:14.022494   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 23:00:14.022557   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 23:00:14.055302   54248 cri.go:89] found id: "0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135"
	I0717 23:00:14.055327   54248 cri.go:89] found id: ""
	I0717 23:00:14.055336   54248 logs.go:284] 1 containers: [0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135]
	I0717 23:00:14.055388   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:14.059980   54248 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 23:00:14.060041   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 23:00:14.092467   54248 cri.go:89] found id: ""
	I0717 23:00:14.092495   54248 logs.go:284] 0 containers: []
	W0717 23:00:14.092505   54248 logs.go:286] No container was found matching "kindnet"
	I0717 23:00:14.092512   54248 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 23:00:14.092570   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 23:00:14.127348   54248 cri.go:89] found id: "9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87"
	I0717 23:00:14.127370   54248 cri.go:89] found id: ""
	I0717 23:00:14.127383   54248 logs.go:284] 1 containers: [9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87]
	I0717 23:00:14.127438   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:14.132646   54248 logs.go:123] Gathering logs for dmesg ...
	I0717 23:00:14.132673   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 23:00:14.147882   54248 logs.go:123] Gathering logs for kube-apiserver [50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c] ...
	I0717 23:00:14.147911   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c"
	I0717 23:00:14.198417   54248 logs.go:123] Gathering logs for etcd [e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2] ...
	I0717 23:00:14.198466   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2"
	I0717 23:00:14.244734   54248 logs.go:123] Gathering logs for kube-scheduler [a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e] ...
	I0717 23:00:14.244775   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e"
	I0717 23:00:14.287920   54248 logs.go:123] Gathering logs for storage-provisioner [9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87] ...
	I0717 23:00:14.287956   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87"
	I0717 23:00:14.333785   54248 logs.go:123] Gathering logs for container status ...
	I0717 23:00:14.333820   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 23:00:14.378892   54248 logs.go:123] Gathering logs for kubelet ...
	I0717 23:00:14.378930   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 23:00:14.482292   54248 logs.go:123] Gathering logs for coredns [828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594] ...
	I0717 23:00:14.482332   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594"
	I0717 23:00:14.525418   54248 logs.go:123] Gathering logs for kube-proxy [5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea] ...
	I0717 23:00:14.525445   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea"
	I0717 23:00:14.562013   54248 logs.go:123] Gathering logs for kube-controller-manager [0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135] ...
	I0717 23:00:14.562050   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135"
	I0717 23:00:14.609917   54248 logs.go:123] Gathering logs for CRI-O ...
	I0717 23:00:14.609955   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 23:00:15.088465   54248 logs.go:123] Gathering logs for describe nodes ...
	I0717 23:00:15.088502   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 23:00:17.743963   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 23:00:17.761437   54248 api_server.go:72] duration metric: took 4m9.176341685s to wait for apiserver process to appear ...
	I0717 23:00:17.761464   54248 api_server.go:88] waiting for apiserver healthz status ...
	I0717 23:00:17.761499   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 23:00:17.761569   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 23:00:17.796097   54248 cri.go:89] found id: "50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c"
	I0717 23:00:17.796126   54248 cri.go:89] found id: ""
	I0717 23:00:17.796136   54248 logs.go:284] 1 containers: [50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c]
	I0717 23:00:17.796194   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:17.800256   54248 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 23:00:17.800318   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 23:00:17.830519   54248 cri.go:89] found id: "e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2"
	I0717 23:00:17.830540   54248 cri.go:89] found id: ""
	I0717 23:00:17.830549   54248 logs.go:284] 1 containers: [e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2]
	I0717 23:00:17.830597   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:17.835086   54248 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 23:00:17.835158   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 23:00:17.869787   54248 cri.go:89] found id: "828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594"
	I0717 23:00:17.869810   54248 cri.go:89] found id: ""
	I0717 23:00:17.869817   54248 logs.go:284] 1 containers: [828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594]
	I0717 23:00:17.869865   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:17.874977   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 23:00:17.875042   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 23:00:17.906026   54248 cri.go:89] found id: "a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e"
	I0717 23:00:17.906060   54248 cri.go:89] found id: ""
	I0717 23:00:17.906070   54248 logs.go:284] 1 containers: [a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e]
	I0717 23:00:17.906130   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:17.912549   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 23:00:17.912619   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 23:00:17.945804   54248 cri.go:89] found id: "5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea"
	I0717 23:00:17.945832   54248 cri.go:89] found id: ""
	I0717 23:00:17.945842   54248 logs.go:284] 1 containers: [5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea]
	I0717 23:00:17.945892   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:17.950115   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 23:00:17.950170   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 23:00:17.980790   54248 cri.go:89] found id: "0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135"
	I0717 23:00:17.980816   54248 cri.go:89] found id: ""
	I0717 23:00:17.980825   54248 logs.go:284] 1 containers: [0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135]
	I0717 23:00:17.980893   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:19.127901   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:21.628419   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:17.985352   54248 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 23:00:17.987262   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 23:00:18.019763   54248 cri.go:89] found id: ""
	I0717 23:00:18.019794   54248 logs.go:284] 0 containers: []
	W0717 23:00:18.019804   54248 logs.go:286] No container was found matching "kindnet"
	I0717 23:00:18.019812   54248 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 23:00:18.019875   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 23:00:18.052106   54248 cri.go:89] found id: "9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87"
	I0717 23:00:18.052135   54248 cri.go:89] found id: ""
	I0717 23:00:18.052144   54248 logs.go:284] 1 containers: [9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87]
	I0717 23:00:18.052192   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:18.057066   54248 logs.go:123] Gathering logs for kube-scheduler [a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e] ...
	I0717 23:00:18.057093   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e"
	I0717 23:00:18.100637   54248 logs.go:123] Gathering logs for kube-proxy [5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea] ...
	I0717 23:00:18.100672   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea"
	I0717 23:00:18.137149   54248 logs.go:123] Gathering logs for kube-controller-manager [0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135] ...
	I0717 23:00:18.137176   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135"
	I0717 23:00:18.191633   54248 logs.go:123] Gathering logs for storage-provisioner [9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87] ...
	I0717 23:00:18.191679   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87"
	I0717 23:00:18.231765   54248 logs.go:123] Gathering logs for dmesg ...
	I0717 23:00:18.231798   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 23:00:18.250030   54248 logs.go:123] Gathering logs for kube-apiserver [50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c] ...
	I0717 23:00:18.250061   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c"
	I0717 23:00:18.312833   54248 logs.go:123] Gathering logs for etcd [e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2] ...
	I0717 23:00:18.312881   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2"
	I0717 23:00:18.357152   54248 logs.go:123] Gathering logs for coredns [828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594] ...
	I0717 23:00:18.357190   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594"
	I0717 23:00:18.388834   54248 logs.go:123] Gathering logs for kubelet ...
	I0717 23:00:18.388871   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 23:00:18.491866   54248 logs.go:123] Gathering logs for describe nodes ...
	I0717 23:00:18.491898   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 23:00:18.638732   54248 logs.go:123] Gathering logs for CRI-O ...
	I0717 23:00:18.638761   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 23:00:19.135753   54248 logs.go:123] Gathering logs for container status ...
	I0717 23:00:19.135788   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 23:00:21.678446   54248 api_server.go:253] Checking apiserver healthz at https://192.168.61.179:8443/healthz ...
	I0717 23:00:21.684484   54248 api_server.go:279] https://192.168.61.179:8443/healthz returned 200:
	ok
	I0717 23:00:21.686359   54248 api_server.go:141] control plane version: v1.27.3
	I0717 23:00:21.686385   54248 api_server.go:131] duration metric: took 3.924913504s to wait for apiserver health ...
	I0717 23:00:21.686395   54248 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 23:00:21.686420   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 23:00:21.686476   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 23:00:21.720978   54248 cri.go:89] found id: "50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c"
	I0717 23:00:21.721002   54248 cri.go:89] found id: ""
	I0717 23:00:21.721012   54248 logs.go:284] 1 containers: [50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c]
	I0717 23:00:21.721070   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:21.726790   54248 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 23:00:21.726860   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 23:00:21.756975   54248 cri.go:89] found id: "e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2"
	I0717 23:00:21.757001   54248 cri.go:89] found id: ""
	I0717 23:00:21.757011   54248 logs.go:284] 1 containers: [e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2]
	I0717 23:00:21.757078   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:21.761611   54248 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 23:00:21.761681   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 23:00:21.795689   54248 cri.go:89] found id: "828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594"
	I0717 23:00:21.795709   54248 cri.go:89] found id: ""
	I0717 23:00:21.795716   54248 logs.go:284] 1 containers: [828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594]
	I0717 23:00:21.795767   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:21.800172   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 23:00:21.800236   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 23:00:21.833931   54248 cri.go:89] found id: "a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e"
	I0717 23:00:21.833957   54248 cri.go:89] found id: ""
	I0717 23:00:21.833968   54248 logs.go:284] 1 containers: [a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e]
	I0717 23:00:21.834026   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:21.839931   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 23:00:21.840003   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 23:00:21.874398   54248 cri.go:89] found id: "5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea"
	I0717 23:00:21.874423   54248 cri.go:89] found id: ""
	I0717 23:00:21.874432   54248 logs.go:284] 1 containers: [5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea]
	I0717 23:00:21.874489   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:21.878922   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 23:00:21.878986   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 23:00:21.913781   54248 cri.go:89] found id: "0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135"
	I0717 23:00:21.913812   54248 cri.go:89] found id: ""
	I0717 23:00:21.913821   54248 logs.go:284] 1 containers: [0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135]
	I0717 23:00:21.913877   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:21.918217   54248 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 23:00:21.918284   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 23:00:21.951832   54248 cri.go:89] found id: ""
	I0717 23:00:21.951859   54248 logs.go:284] 0 containers: []
	W0717 23:00:21.951869   54248 logs.go:286] No container was found matching "kindnet"
	I0717 23:00:21.951876   54248 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 23:00:21.951925   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 23:00:21.987514   54248 cri.go:89] found id: "9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87"
	I0717 23:00:21.987543   54248 cri.go:89] found id: ""
	I0717 23:00:21.987553   54248 logs.go:284] 1 containers: [9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87]
	I0717 23:00:21.987617   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:21.992144   54248 logs.go:123] Gathering logs for container status ...
	I0717 23:00:21.992164   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 23:00:22.031685   54248 logs.go:123] Gathering logs for dmesg ...
	I0717 23:00:22.031715   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 23:00:22.046652   54248 logs.go:123] Gathering logs for describe nodes ...
	I0717 23:00:22.046691   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 23:00:22.191164   54248 logs.go:123] Gathering logs for kube-scheduler [a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e] ...
	I0717 23:00:22.191191   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e"
	I0717 23:00:22.233174   54248 logs.go:123] Gathering logs for kube-proxy [5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea] ...
	I0717 23:00:22.233209   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea"
	I0717 23:00:22.279246   54248 logs.go:123] Gathering logs for kube-controller-manager [0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135] ...
	I0717 23:00:22.279273   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135"
	I0717 23:00:22.330534   54248 logs.go:123] Gathering logs for CRI-O ...
	I0717 23:00:22.330565   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 23:00:22.837335   54248 logs.go:123] Gathering logs for kubelet ...
	I0717 23:00:22.837382   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 23:00:22.947015   54248 logs.go:123] Gathering logs for kube-apiserver [50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c] ...
	I0717 23:00:22.947073   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c"
	I0717 23:00:22.991731   54248 logs.go:123] Gathering logs for etcd [e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2] ...
	I0717 23:00:22.991768   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2"
	I0717 23:00:23.036115   54248 logs.go:123] Gathering logs for coredns [828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594] ...
	I0717 23:00:23.036146   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594"
	I0717 23:00:23.071825   54248 logs.go:123] Gathering logs for storage-provisioner [9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87] ...
	I0717 23:00:23.071860   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87"
	I0717 23:00:25.629247   54248 system_pods.go:59] 8 kube-system pods found
	I0717 23:00:25.629277   54248 system_pods.go:61] "coredns-5d78c9869d-6ljtn" [9488690c-8407-42ce-9938-039af0fa2c4d] Running
	I0717 23:00:25.629284   54248 system_pods.go:61] "etcd-embed-certs-571296" [e6e8b5d1-b1e7-4c3d-89d7-f44a2a6aff8b] Running
	I0717 23:00:25.629291   54248 system_pods.go:61] "kube-apiserver-embed-certs-571296" [3b5f5396-d325-445c-b3af-4cc7a506143e] Running
	I0717 23:00:25.629298   54248 system_pods.go:61] "kube-controller-manager-embed-certs-571296" [e113ffeb-97bd-4b0d-a432-b58be43b295b] Running
	I0717 23:00:25.629305   54248 system_pods.go:61] "kube-proxy-xjpds" [7c074cca-2579-4a54-bf55-77bba0fbcd34] Running
	I0717 23:00:25.629311   54248 system_pods.go:61] "kube-scheduler-embed-certs-571296" [1d192365-8c7b-4367-b4b0-fe9f6f5874af] Running
	I0717 23:00:25.629320   54248 system_pods.go:61] "metrics-server-74d5c6b9c-cknmm" [d1fb930f-518d-4ff4-94fe-7743ab55ecc6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 23:00:25.629331   54248 system_pods.go:61] "storage-provisioner" [1138e736-ef8d-4d24-86d5-cac3f58f0dd6] Running
	I0717 23:00:25.629339   54248 system_pods.go:74] duration metric: took 3.942938415s to wait for pod list to return data ...
	I0717 23:00:25.629347   54248 default_sa.go:34] waiting for default service account to be created ...
	I0717 23:00:25.632079   54248 default_sa.go:45] found service account: "default"
	I0717 23:00:25.632105   54248 default_sa.go:55] duration metric: took 2.751332ms for default service account to be created ...
	I0717 23:00:25.632114   54248 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 23:00:25.639267   54248 system_pods.go:86] 8 kube-system pods found
	I0717 23:00:25.639297   54248 system_pods.go:89] "coredns-5d78c9869d-6ljtn" [9488690c-8407-42ce-9938-039af0fa2c4d] Running
	I0717 23:00:25.639305   54248 system_pods.go:89] "etcd-embed-certs-571296" [e6e8b5d1-b1e7-4c3d-89d7-f44a2a6aff8b] Running
	I0717 23:00:25.639312   54248 system_pods.go:89] "kube-apiserver-embed-certs-571296" [3b5f5396-d325-445c-b3af-4cc7a506143e] Running
	I0717 23:00:25.639321   54248 system_pods.go:89] "kube-controller-manager-embed-certs-571296" [e113ffeb-97bd-4b0d-a432-b58be43b295b] Running
	I0717 23:00:25.639328   54248 system_pods.go:89] "kube-proxy-xjpds" [7c074cca-2579-4a54-bf55-77bba0fbcd34] Running
	I0717 23:00:25.639335   54248 system_pods.go:89] "kube-scheduler-embed-certs-571296" [1d192365-8c7b-4367-b4b0-fe9f6f5874af] Running
	I0717 23:00:25.639345   54248 system_pods.go:89] "metrics-server-74d5c6b9c-cknmm" [d1fb930f-518d-4ff4-94fe-7743ab55ecc6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 23:00:25.639353   54248 system_pods.go:89] "storage-provisioner" [1138e736-ef8d-4d24-86d5-cac3f58f0dd6] Running
	I0717 23:00:25.639362   54248 system_pods.go:126] duration metric: took 7.242476ms to wait for k8s-apps to be running ...
	I0717 23:00:25.639374   54248 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 23:00:25.639426   54248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 23:00:25.654026   54248 system_svc.go:56] duration metric: took 14.646361ms WaitForService to wait for kubelet.
	I0717 23:00:25.654049   54248 kubeadm.go:581] duration metric: took 4m17.068957071s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 23:00:25.654069   54248 node_conditions.go:102] verifying NodePressure condition ...
	I0717 23:00:25.658024   54248 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 23:00:25.658049   54248 node_conditions.go:123] node cpu capacity is 2
	I0717 23:00:25.658058   54248 node_conditions.go:105] duration metric: took 3.985859ms to run NodePressure ...
	I0717 23:00:25.658069   54248 start.go:228] waiting for startup goroutines ...
	I0717 23:00:25.658074   54248 start.go:233] waiting for cluster config update ...
	I0717 23:00:25.658083   54248 start.go:242] writing updated cluster config ...
	I0717 23:00:25.658335   54248 ssh_runner.go:195] Run: rm -f paused
	I0717 23:00:25.709576   54248 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 23:00:25.711805   54248 out.go:177] * Done! kubectl is now configured to use "embed-certs-571296" cluster and "default" namespace by default
	I0717 23:00:24.128252   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:26.130357   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:28.627639   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:30.627679   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:33.128946   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:35.627313   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:37.627998   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:40.128503   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:42.629092   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:45.126773   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:47.127774   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:49.128495   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:51.628994   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:54.127925   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:56.128908   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:56.725699   54649 pod_ready.go:81] duration metric: took 4m0.000620769s waiting for pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace to be "Ready" ...
	E0717 23:00:56.725751   54649 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 23:00:56.725769   54649 pod_ready.go:38] duration metric: took 4m2.87768055s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 23:00:56.725797   54649 api_server.go:52] waiting for apiserver process to appear ...
	I0717 23:00:56.725839   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 23:00:56.725908   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 23:00:56.788229   54649 cri.go:89] found id: "45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0"
	I0717 23:00:56.788257   54649 cri.go:89] found id: ""
	I0717 23:00:56.788266   54649 logs.go:284] 1 containers: [45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0]
	I0717 23:00:56.788337   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:00:56.793647   54649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 23:00:56.793709   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 23:00:56.828720   54649 cri.go:89] found id: "7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab"
	I0717 23:00:56.828741   54649 cri.go:89] found id: ""
	I0717 23:00:56.828748   54649 logs.go:284] 1 containers: [7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab]
	I0717 23:00:56.828790   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:00:56.833266   54649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 23:00:56.833339   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 23:00:56.865377   54649 cri.go:89] found id: "30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223"
	I0717 23:00:56.865407   54649 cri.go:89] found id: ""
	I0717 23:00:56.865416   54649 logs.go:284] 1 containers: [30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223]
	I0717 23:00:56.865478   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:00:56.870881   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 23:00:56.870944   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 23:00:56.908871   54649 cri.go:89] found id: "4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad"
	I0717 23:00:56.908891   54649 cri.go:89] found id: ""
	I0717 23:00:56.908899   54649 logs.go:284] 1 containers: [4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad]
	I0717 23:00:56.908952   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:00:56.913121   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 23:00:56.913171   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 23:00:56.946752   54649 cri.go:89] found id: "a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524"
	I0717 23:00:56.946797   54649 cri.go:89] found id: ""
	I0717 23:00:56.946806   54649 logs.go:284] 1 containers: [a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524]
	I0717 23:00:56.946864   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:00:56.951141   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 23:00:56.951216   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 23:00:56.986967   54649 cri.go:89] found id: "7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10"
	I0717 23:00:56.986987   54649 cri.go:89] found id: ""
	I0717 23:00:56.986996   54649 logs.go:284] 1 containers: [7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10]
	I0717 23:00:56.987039   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:00:56.993578   54649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 23:00:56.993655   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 23:00:57.030468   54649 cri.go:89] found id: ""
	I0717 23:00:57.030491   54649 logs.go:284] 0 containers: []
	W0717 23:00:57.030498   54649 logs.go:286] No container was found matching "kindnet"
	I0717 23:00:57.030503   54649 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 23:00:57.030548   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 23:00:57.070533   54649 cri.go:89] found id: "4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6"
	I0717 23:00:57.070564   54649 cri.go:89] found id: ""
	I0717 23:00:57.070574   54649 logs.go:284] 1 containers: [4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6]
	I0717 23:00:57.070632   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:00:57.075379   54649 logs.go:123] Gathering logs for container status ...
	I0717 23:00:57.075685   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 23:00:57.121312   54649 logs.go:123] Gathering logs for kubelet ...
	I0717 23:00:57.121343   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 23:00:57.222647   54649 logs.go:138] Found kubelet problem: Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: W0717 22:56:50.841820    3828 reflector.go:533] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	W0717 23:00:57.222960   54649 logs.go:138] Found kubelet problem: Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: E0717 22:56:50.841863    3828 reflector.go:148] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	I0717 23:00:57.251443   54649 logs.go:123] Gathering logs for dmesg ...
	I0717 23:00:57.251481   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 23:00:57.266213   54649 logs.go:123] Gathering logs for coredns [30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223] ...
	I0717 23:00:57.266242   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223"
	I0717 23:00:57.304032   54649 logs.go:123] Gathering logs for kube-proxy [a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524] ...
	I0717 23:00:57.304058   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524"
	I0717 23:00:57.342839   54649 logs.go:123] Gathering logs for storage-provisioner [4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6] ...
	I0717 23:00:57.342865   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6"
	I0717 23:00:57.378086   54649 logs.go:123] Gathering logs for CRI-O ...
	I0717 23:00:57.378118   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 23:00:57.893299   54649 logs.go:123] Gathering logs for describe nodes ...
	I0717 23:00:57.893338   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 23:00:58.043526   54649 logs.go:123] Gathering logs for kube-apiserver [45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0] ...
	I0717 23:00:58.043564   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0"
	I0717 23:00:58.096422   54649 logs.go:123] Gathering logs for etcd [7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab] ...
	I0717 23:00:58.096452   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab"
	I0717 23:00:58.141423   54649 logs.go:123] Gathering logs for kube-scheduler [4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad] ...
	I0717 23:00:58.141452   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad"
	I0717 23:00:58.183755   54649 logs.go:123] Gathering logs for kube-controller-manager [7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10] ...
	I0717 23:00:58.183792   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10"
	I0717 23:00:58.239385   54649 out.go:309] Setting ErrFile to fd 2...
	I0717 23:00:58.239418   54649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0717 23:00:58.239479   54649 out.go:239] X Problems detected in kubelet:
	W0717 23:00:58.239506   54649 out.go:239]   Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: W0717 22:56:50.841820    3828 reflector.go:533] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	W0717 23:00:58.239522   54649 out.go:239]   Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: E0717 22:56:50.841863    3828 reflector.go:148] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	I0717 23:00:58.239527   54649 out.go:309] Setting ErrFile to fd 2...
	I0717 23:00:58.239533   54649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 23:01:08.241689   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 23:01:08.259063   54649 api_server.go:72] duration metric: took 4m17.020334708s to wait for apiserver process to appear ...
	I0717 23:01:08.259090   54649 api_server.go:88] waiting for apiserver healthz status ...
	I0717 23:01:08.259125   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 23:01:08.259186   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 23:01:08.289063   54649 cri.go:89] found id: "45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0"
	I0717 23:01:08.289080   54649 cri.go:89] found id: ""
	I0717 23:01:08.289088   54649 logs.go:284] 1 containers: [45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0]
	I0717 23:01:08.289146   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:08.293604   54649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 23:01:08.293668   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 23:01:08.323866   54649 cri.go:89] found id: "7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab"
	I0717 23:01:08.323889   54649 cri.go:89] found id: ""
	I0717 23:01:08.323899   54649 logs.go:284] 1 containers: [7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab]
	I0717 23:01:08.324251   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:08.330335   54649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 23:01:08.330405   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 23:01:08.380361   54649 cri.go:89] found id: "30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223"
	I0717 23:01:08.380387   54649 cri.go:89] found id: ""
	I0717 23:01:08.380399   54649 logs.go:284] 1 containers: [30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223]
	I0717 23:01:08.380458   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:08.384547   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 23:01:08.384612   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 23:01:08.416767   54649 cri.go:89] found id: "4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad"
	I0717 23:01:08.416787   54649 cri.go:89] found id: ""
	I0717 23:01:08.416793   54649 logs.go:284] 1 containers: [4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad]
	I0717 23:01:08.416836   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:08.420982   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 23:01:08.421031   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 23:01:08.451034   54649 cri.go:89] found id: "a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524"
	I0717 23:01:08.451064   54649 cri.go:89] found id: ""
	I0717 23:01:08.451074   54649 logs.go:284] 1 containers: [a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524]
	I0717 23:01:08.451126   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:08.455015   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 23:01:08.455063   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 23:01:08.486539   54649 cri.go:89] found id: "7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10"
	I0717 23:01:08.486560   54649 cri.go:89] found id: ""
	I0717 23:01:08.486567   54649 logs.go:284] 1 containers: [7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10]
	I0717 23:01:08.486620   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:08.491106   54649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 23:01:08.491171   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 23:01:08.523068   54649 cri.go:89] found id: ""
	I0717 23:01:08.523099   54649 logs.go:284] 0 containers: []
	W0717 23:01:08.523109   54649 logs.go:286] No container was found matching "kindnet"
	I0717 23:01:08.523116   54649 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 23:01:08.523201   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 23:01:08.556090   54649 cri.go:89] found id: "4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6"
	I0717 23:01:08.556116   54649 cri.go:89] found id: ""
	I0717 23:01:08.556125   54649 logs.go:284] 1 containers: [4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6]
	I0717 23:01:08.556181   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:08.560278   54649 logs.go:123] Gathering logs for storage-provisioner [4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6] ...
	I0717 23:01:08.560301   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6"
	I0717 23:01:08.595021   54649 logs.go:123] Gathering logs for container status ...
	I0717 23:01:08.595052   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 23:01:08.640723   54649 logs.go:123] Gathering logs for dmesg ...
	I0717 23:01:08.640757   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 23:01:08.654641   54649 logs.go:123] Gathering logs for describe nodes ...
	I0717 23:01:08.654679   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 23:01:08.789999   54649 logs.go:123] Gathering logs for kube-apiserver [45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0] ...
	I0717 23:01:08.790026   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0"
	I0717 23:01:08.837387   54649 logs.go:123] Gathering logs for coredns [30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223] ...
	I0717 23:01:08.837420   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223"
	I0717 23:01:08.871514   54649 logs.go:123] Gathering logs for kube-proxy [a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524] ...
	I0717 23:01:08.871565   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524"
	I0717 23:01:08.911626   54649 logs.go:123] Gathering logs for kube-controller-manager [7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10] ...
	I0717 23:01:08.911657   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10"
	I0717 23:01:08.961157   54649 logs.go:123] Gathering logs for kubelet ...
	I0717 23:01:08.961192   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 23:01:09.040804   54649 logs.go:138] Found kubelet problem: Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: W0717 22:56:50.841820    3828 reflector.go:533] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	W0717 23:01:09.040992   54649 logs.go:138] Found kubelet problem: Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: E0717 22:56:50.841863    3828 reflector.go:148] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	I0717 23:01:09.067178   54649 logs.go:123] Gathering logs for etcd [7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab] ...
	I0717 23:01:09.067213   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab"
	I0717 23:01:09.104138   54649 logs.go:123] Gathering logs for kube-scheduler [4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad] ...
	I0717 23:01:09.104170   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad"
	I0717 23:01:09.146623   54649 logs.go:123] Gathering logs for CRI-O ...
	I0717 23:01:09.146653   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 23:01:09.681092   54649 out.go:309] Setting ErrFile to fd 2...
	I0717 23:01:09.681128   54649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0717 23:01:09.681200   54649 out.go:239] X Problems detected in kubelet:
	W0717 23:01:09.681217   54649 out.go:239]   Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: W0717 22:56:50.841820    3828 reflector.go:533] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	W0717 23:01:09.681229   54649 out.go:239]   Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: E0717 22:56:50.841863    3828 reflector.go:148] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	I0717 23:01:09.681237   54649 out.go:309] Setting ErrFile to fd 2...
	I0717 23:01:09.681244   54649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 23:01:19.682682   54649 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8444/healthz ...
	I0717 23:01:19.688102   54649 api_server.go:279] https://192.168.72.118:8444/healthz returned 200:
	ok
	I0717 23:01:19.689304   54649 api_server.go:141] control plane version: v1.27.3
	I0717 23:01:19.689323   54649 api_server.go:131] duration metric: took 11.430226781s to wait for apiserver health ...
	I0717 23:01:19.689330   54649 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 23:01:19.689349   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 23:01:19.689393   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 23:01:19.731728   54649 cri.go:89] found id: "45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0"
	I0717 23:01:19.731748   54649 cri.go:89] found id: ""
	I0717 23:01:19.731756   54649 logs.go:284] 1 containers: [45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0]
	I0717 23:01:19.731807   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:19.737797   54649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 23:01:19.737857   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 23:01:19.776355   54649 cri.go:89] found id: "7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab"
	I0717 23:01:19.776377   54649 cri.go:89] found id: ""
	I0717 23:01:19.776385   54649 logs.go:284] 1 containers: [7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab]
	I0717 23:01:19.776438   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:19.780589   54649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 23:01:19.780645   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 23:01:19.810917   54649 cri.go:89] found id: "30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223"
	I0717 23:01:19.810938   54649 cri.go:89] found id: ""
	I0717 23:01:19.810947   54649 logs.go:284] 1 containers: [30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223]
	I0717 23:01:19.811001   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:19.815185   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 23:01:19.815252   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 23:01:19.852138   54649 cri.go:89] found id: "4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad"
	I0717 23:01:19.852161   54649 cri.go:89] found id: ""
	I0717 23:01:19.852170   54649 logs.go:284] 1 containers: [4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad]
	I0717 23:01:19.852225   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:19.856947   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 23:01:19.857012   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 23:01:19.893668   54649 cri.go:89] found id: "a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524"
	I0717 23:01:19.893695   54649 cri.go:89] found id: ""
	I0717 23:01:19.893705   54649 logs.go:284] 1 containers: [a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524]
	I0717 23:01:19.893763   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:19.897862   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 23:01:19.897915   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 23:01:19.935000   54649 cri.go:89] found id: "7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10"
	I0717 23:01:19.935024   54649 cri.go:89] found id: ""
	I0717 23:01:19.935033   54649 logs.go:284] 1 containers: [7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10]
	I0717 23:01:19.935097   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:19.939417   54649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 23:01:19.939487   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 23:01:19.971266   54649 cri.go:89] found id: ""
	I0717 23:01:19.971296   54649 logs.go:284] 0 containers: []
	W0717 23:01:19.971305   54649 logs.go:286] No container was found matching "kindnet"
	I0717 23:01:19.971313   54649 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 23:01:19.971374   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 23:01:20.007281   54649 cri.go:89] found id: "4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6"
	I0717 23:01:20.007299   54649 cri.go:89] found id: ""
	I0717 23:01:20.007306   54649 logs.go:284] 1 containers: [4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6]
	I0717 23:01:20.007351   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:20.011751   54649 logs.go:123] Gathering logs for describe nodes ...
	I0717 23:01:20.011776   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 23:01:20.146025   54649 logs.go:123] Gathering logs for kube-apiserver [45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0] ...
	I0717 23:01:20.146052   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0"
	I0717 23:01:20.197984   54649 logs.go:123] Gathering logs for etcd [7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab] ...
	I0717 23:01:20.198014   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab"
	I0717 23:01:20.240729   54649 logs.go:123] Gathering logs for coredns [30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223] ...
	I0717 23:01:20.240765   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223"
	I0717 23:01:20.280904   54649 logs.go:123] Gathering logs for kube-scheduler [4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad] ...
	I0717 23:01:20.280931   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad"
	I0717 23:01:20.338648   54649 logs.go:123] Gathering logs for storage-provisioner [4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6] ...
	I0717 23:01:20.338679   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6"
	I0717 23:01:20.378549   54649 logs.go:123] Gathering logs for CRI-O ...
	I0717 23:01:20.378586   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 23:01:20.858716   54649 logs.go:123] Gathering logs for kubelet ...
	I0717 23:01:20.858759   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 23:01:20.944347   54649 logs.go:138] Found kubelet problem: Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: W0717 22:56:50.841820    3828 reflector.go:533] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	W0717 23:01:20.944538   54649 logs.go:138] Found kubelet problem: Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: E0717 22:56:50.841863    3828 reflector.go:148] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	I0717 23:01:20.971487   54649 logs.go:123] Gathering logs for kube-proxy [a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524] ...
	I0717 23:01:20.971520   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524"
	I0717 23:01:21.007705   54649 logs.go:123] Gathering logs for kube-controller-manager [7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10] ...
	I0717 23:01:21.007736   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10"
	I0717 23:01:21.059674   54649 logs.go:123] Gathering logs for container status ...
	I0717 23:01:21.059703   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 23:01:21.095693   54649 logs.go:123] Gathering logs for dmesg ...
	I0717 23:01:21.095722   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 23:01:21.110247   54649 out.go:309] Setting ErrFile to fd 2...
	I0717 23:01:21.110273   54649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0717 23:01:21.110336   54649 out.go:239] X Problems detected in kubelet:
	W0717 23:01:21.110354   54649 out.go:239]   Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: W0717 22:56:50.841820    3828 reflector.go:533] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	W0717 23:01:21.110364   54649 out.go:239]   Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: E0717 22:56:50.841863    3828 reflector.go:148] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	I0717 23:01:21.110371   54649 out.go:309] Setting ErrFile to fd 2...
	I0717 23:01:21.110379   54649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 23:01:31.121237   54649 system_pods.go:59] 8 kube-system pods found
	I0717 23:01:31.121266   54649 system_pods.go:61] "coredns-5d78c9869d-rqcjj" [9f3bc4cf-fb20-413e-b367-27bcb997ab80] Running
	I0717 23:01:31.121272   54649 system_pods.go:61] "etcd-default-k8s-diff-port-504828" [1e432373-0f87-4cda-969e-492a8b534af0] Running
	I0717 23:01:31.121280   54649 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-504828" [573bd1d1-09ff-40b5-9746-0b3fa3d51f08] Running
	I0717 23:01:31.121290   54649 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-504828" [c6baeefc-57b7-4710-998c-0af932d2db14] Running
	I0717 23:01:31.121299   54649 system_pods.go:61] "kube-proxy-nmtc8" [1f8a0182-d1df-4609-86d1-7695a138e32f] Running
	I0717 23:01:31.121307   54649 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-504828" [df487feb-f937-4832-ad65-38718d4325c5] Running
	I0717 23:01:31.121317   54649 system_pods.go:61] "metrics-server-74d5c6b9c-j8f2f" [328c892b-7402-480b-bc29-a316c8fb7b1f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 23:01:31.121339   54649 system_pods.go:61] "storage-provisioner" [0b840b23-0b6f-4ed1-9ae5-d96311b3b9f1] Running
	I0717 23:01:31.121347   54649 system_pods.go:74] duration metric: took 11.432011006s to wait for pod list to return data ...
	I0717 23:01:31.121357   54649 default_sa.go:34] waiting for default service account to be created ...
	I0717 23:01:31.124377   54649 default_sa.go:45] found service account: "default"
	I0717 23:01:31.124403   54649 default_sa.go:55] duration metric: took 3.036772ms for default service account to be created ...
	I0717 23:01:31.124413   54649 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 23:01:31.131080   54649 system_pods.go:86] 8 kube-system pods found
	I0717 23:01:31.131116   54649 system_pods.go:89] "coredns-5d78c9869d-rqcjj" [9f3bc4cf-fb20-413e-b367-27bcb997ab80] Running
	I0717 23:01:31.131125   54649 system_pods.go:89] "etcd-default-k8s-diff-port-504828" [1e432373-0f87-4cda-969e-492a8b534af0] Running
	I0717 23:01:31.131132   54649 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-504828" [573bd1d1-09ff-40b5-9746-0b3fa3d51f08] Running
	I0717 23:01:31.131140   54649 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-504828" [c6baeefc-57b7-4710-998c-0af932d2db14] Running
	I0717 23:01:31.131151   54649 system_pods.go:89] "kube-proxy-nmtc8" [1f8a0182-d1df-4609-86d1-7695a138e32f] Running
	I0717 23:01:31.131158   54649 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-504828" [df487feb-f937-4832-ad65-38718d4325c5] Running
	I0717 23:01:31.131182   54649 system_pods.go:89] "metrics-server-74d5c6b9c-j8f2f" [328c892b-7402-480b-bc29-a316c8fb7b1f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 23:01:31.131190   54649 system_pods.go:89] "storage-provisioner" [0b840b23-0b6f-4ed1-9ae5-d96311b3b9f1] Running
	I0717 23:01:31.131204   54649 system_pods.go:126] duration metric: took 6.785139ms to wait for k8s-apps to be running ...
	I0717 23:01:31.131211   54649 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 23:01:31.131260   54649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 23:01:31.150458   54649 system_svc.go:56] duration metric: took 19.234064ms WaitForService to wait for kubelet.
	I0717 23:01:31.150495   54649 kubeadm.go:581] duration metric: took 4m39.911769992s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 23:01:31.150523   54649 node_conditions.go:102] verifying NodePressure condition ...
	I0717 23:01:31.153677   54649 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 23:01:31.153700   54649 node_conditions.go:123] node cpu capacity is 2
	I0717 23:01:31.153710   54649 node_conditions.go:105] duration metric: took 3.182344ms to run NodePressure ...
	I0717 23:01:31.153720   54649 start.go:228] waiting for startup goroutines ...
	I0717 23:01:31.153726   54649 start.go:233] waiting for cluster config update ...
	I0717 23:01:31.153737   54649 start.go:242] writing updated cluster config ...
	I0717 23:01:31.153995   54649 ssh_runner.go:195] Run: rm -f paused
	I0717 23:01:31.204028   54649 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 23:01:31.207280   54649 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-504828" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-07-17 22:51:25 UTC, ends at Mon 2023-07-17 23:08:02 UTC. --
	Jul 17 23:08:01 old-k8s-version-332820 crio[709]: time="2023-07-17 23:08:01.893351533Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:b5359112c46eb47cdd3af5d5aec19ff2abaabc91270863590570596650e3aecd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1689634635077730043,StartedAt:1689634635111786597,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:k8s.gcr.io/etcd:3.3.15-0,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c731a3514f98e74d0c0e942b30282b55,},Annotations:map[string]string{io.kubernetes.container.hash: 9d50df90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/c731a3514f98e74d0c0e942b30282b55/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/c731a3514f98e74d0c0e942b30282b55/containers/etcd/308de654,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_etcd-old-k8s-version-332820_c731a3514f98e74d0c0e942b30282b55/etcd/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=b479f96e-f981-4604-9d79-e398810dd4b1 name=/runtime.v1alpha2.RuntimeService/ContainerSt
atus
	Jul 17 23:08:01 old-k8s-version-332820 crio[709]: time="2023-07-17 23:08:01.893847789Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:88888fbeeecaa7c6544687836984ad599b37fd124d1344cb23bd4d8200985121,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=70aea5de-0017-4c35-bbab-0e4c8b154efe name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Jul 17 23:08:01 old-k8s-version-332820 crio[709]: time="2023-07-17 23:08:01.893951620Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:88888fbeeecaa7c6544687836984ad599b37fd124d1344cb23bd4d8200985121,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1689634633652431394,StartedAt:1689634633702943664,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:k8s.gcr.io/kube-scheduler:v1.16.0,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/b3d303074fe0ca1d42a8bd9ed248df09/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/b3d303074fe0ca1d42a8bd9ed248df09/containers/kube-scheduler/69c42da3,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-scheduler-old-k8s-version-332820_b3d303074fe0ca1d42a8bd9ed248df09/kube-scheduler/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=70aea5de-0017-4c35-bbab-0e4c8b154efe name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Jul 17 23:08:01 old-k8s-version-332820 crio[709]: time="2023-07-17 23:08:01.894593498Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:f35cc67eaadee92a13127b4c9dd501d146a76b99d81048fa3c9557ab02e9a2d7,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=d98d370d-2d54-4d52-be47-eff2a63f62c7 name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Jul 17 23:08:01 old-k8s-version-332820 crio[709]: time="2023-07-17 23:08:01.894680064Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:f35cc67eaadee92a13127b4c9dd501d146a76b99d81048fa3c9557ab02e9a2d7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1689634633500118152,StartedAt:1689634633562748103,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:k8s.gcr.io/kube-controller-manager:v1.16.0,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/7376ddb4f190a0ded9394063437bcb4e/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/7376ddb4f190a0ded9394063437bcb4e/containers/kube-controller-manager/7f7a27a0,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVA
TE,},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-controller-manager-old-k8s-version-332820_7376ddb4f190a0ded9394063437bcb4e/kube-controller-manager/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=d98d370d-2d54-4d52-be47-eff2a63f62c7 name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Jul 17 23:08:01 old-k8s-version-332820 crio[709]: time="2023-07-17 23:08:01.895114997Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:41388bef09878d329575d8894181165b1675bef77f376fc71aa746031a1686b4,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=a721ecf2-9717-4d4a-b57d-669dc9776969 name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Jul 17 23:08:01 old-k8s-version-332820 crio[709]: time="2023-07-17 23:08:01.895295690Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:41388bef09878d329575d8894181165b1675bef77f376fc71aa746031a1686b4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1689634633261703126,StartedAt:1689634633348656833,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:k8s.gcr.io/kube-apiserver:v1.16.0,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0ef24da77c8ba3e688845e562219102,},Annotations:map[string]string{io.kubernetes.container.hash: eb7203d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/e0ef24da77c8ba3e688845e562219102/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/e0ef24da77c8ba3e688845e562219102/containers/kube-apiserver/0aaafd37,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-apiserver-old-k8s-version-332820_e0ef24d
a77c8ba3e688845e562219102/kube-apiserver/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=a721ecf2-9717-4d4a-b57d-669dc9776969 name=/runtime.v1alpha2.RuntimeService/ContainerStatus
	Jul 17 23:08:01 old-k8s-version-332820 crio[709]: time="2023-07-17 23:08:01.896517234Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=eb1a8379-2f64-4e83-ae0f-85c5d2eb0a7e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:08:01 old-k8s-version-332820 crio[709]: time="2023-07-17 23:08:01.896583535Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=eb1a8379-2f64-4e83-ae0f-85c5d2eb0a7e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:08:01 old-k8s-version-332820 crio[709]: time="2023-07-17 23:08:01.896722702Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:62b724cfd1a633a54cf3b5c3ecfd52b9c2e23a367496659e5e8e6692e8ac813e,PodSandboxId:96f5efbc2487124c26564cc4967d7ecc46a88ab84ae597c4236cedbbccc2f917,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634662512907550,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7158a1e3-713a-4702-b1d8-3553d7dfa0de,},Annotations:map[string]string{io.kubernetes.container.hash: 93fd4bd5,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1acb9b6c61f5fc4da5ee0a3781cf9ffef68c4867fd872852acc5e4fc6d721bf7,PodSandboxId:cbec98d5739c990986e9bb6c29758fd427f861cf2412bac4362f159bf0cf472c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1689634662106576475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dpnlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb78806d-3e64-4d07-a9d5-6bebaa1abe2d,},Annotations:map[string]string{io.kubernetes.container.hash: 17922504,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f89a87992124015181bbada6ad53a47d8a1b680af2a42284ebb50ebc0e56c3e,PodSandboxId:13a17920eb9da28301c091437bce641b398342ccf80bb56499db77b8a2013552,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1689634660694109931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-t4d2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e8166c1-8b07-4eca-9d2a-51d2142e7c08,},Annotations:map[string]string{io.kubernetes.container.hash: daf35be2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5359112c46eb47cdd3af5d5aec19ff2abaabc91270863590570596650e3aecd,PodSandboxId:84b5c00c0c09a074b5bb0c34ab9fb6d952424f5bb108bdf5aacb13f65f2e0ff6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1689634635016315026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c731a3514f98e74d0c0e942b30282b55,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 9d50df90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88888fbeeecaa7c6544687836984ad599b37fd124d1344cb23bd4d8200985121,PodSandboxId:0d0464abe6c14b9c7d15a1c003463fbccee13d2ed04534936a99abbd041ca8fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1689634633474882471,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f35cc67eaadee92a13127b4c9dd501d146a76b99d81048fa3c9557ab02e9a2d7,PodSandboxId:be9c23f96cb9c559ef50a19208e0fced8182189d75176cdb4bd1e9e2557ec0f9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1689634633330967857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41388bef09878d329575d8894181165b1675bef77f376fc71aa746031a1686b4,PodSandboxId:eab3e1882343b3a001a34fa04bce2e74487c01e5ed7245652cdd744f20bf107f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1689634633168328665,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0ef24da77c8ba3e688845e562219102,},Annotations:ma
p[string]string{io.kubernetes.container.hash: eb7203d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=eb1a8379-2f64-4e83-ae0f-85c5d2eb0a7e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:08:01 old-k8s-version-332820 crio[709]: time="2023-07-17 23:08:01.930565529Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e7f1b208-b9c3-42f7-8053-f32bb9bd5801 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:08:01 old-k8s-version-332820 crio[709]: time="2023-07-17 23:08:01.930654372Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e7f1b208-b9c3-42f7-8053-f32bb9bd5801 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:08:01 old-k8s-version-332820 crio[709]: time="2023-07-17 23:08:01.930812162Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:62b724cfd1a633a54cf3b5c3ecfd52b9c2e23a367496659e5e8e6692e8ac813e,PodSandboxId:96f5efbc2487124c26564cc4967d7ecc46a88ab84ae597c4236cedbbccc2f917,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634662512907550,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7158a1e3-713a-4702-b1d8-3553d7dfa0de,},Annotations:map[string]string{io.kubernetes.container.hash: 93fd4bd5,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1acb9b6c61f5fc4da5ee0a3781cf9ffef68c4867fd872852acc5e4fc6d721bf7,PodSandboxId:cbec98d5739c990986e9bb6c29758fd427f861cf2412bac4362f159bf0cf472c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1689634662106576475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dpnlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb78806d-3e64-4d07-a9d5-6bebaa1abe2d,},Annotations:map[string]string{io.kubernetes.container.hash: 17922504,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f89a87992124015181bbada6ad53a47d8a1b680af2a42284ebb50ebc0e56c3e,PodSandboxId:13a17920eb9da28301c091437bce641b398342ccf80bb56499db77b8a2013552,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1689634660694109931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-t4d2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e8166c1-8b07-4eca-9d2a-51d2142e7c08,},Annotations:map[string]string{io.kubernetes.container.hash: daf35be2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5359112c46eb47cdd3af5d5aec19ff2abaabc91270863590570596650e3aecd,PodSandboxId:84b5c00c0c09a074b5bb0c34ab9fb6d952424f5bb108bdf5aacb13f65f2e0ff6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1689634635016315026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c731a3514f98e74d0c0e942b30282b55,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 9d50df90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88888fbeeecaa7c6544687836984ad599b37fd124d1344cb23bd4d8200985121,PodSandboxId:0d0464abe6c14b9c7d15a1c003463fbccee13d2ed04534936a99abbd041ca8fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1689634633474882471,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f35cc67eaadee92a13127b4c9dd501d146a76b99d81048fa3c9557ab02e9a2d7,PodSandboxId:be9c23f96cb9c559ef50a19208e0fced8182189d75176cdb4bd1e9e2557ec0f9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1689634633330967857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41388bef09878d329575d8894181165b1675bef77f376fc71aa746031a1686b4,PodSandboxId:eab3e1882343b3a001a34fa04bce2e74487c01e5ed7245652cdd744f20bf107f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1689634633168328665,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0ef24da77c8ba3e688845e562219102,},Annotations:ma
p[string]string{io.kubernetes.container.hash: eb7203d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e7f1b208-b9c3-42f7-8053-f32bb9bd5801 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:08:01 old-k8s-version-332820 crio[709]: time="2023-07-17 23:08:01.967224748Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a8eeafcc-b412-4fad-b271-d9c7d5bf8c1c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:08:01 old-k8s-version-332820 crio[709]: time="2023-07-17 23:08:01.967313254Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a8eeafcc-b412-4fad-b271-d9c7d5bf8c1c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:08:01 old-k8s-version-332820 crio[709]: time="2023-07-17 23:08:01.967498933Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:62b724cfd1a633a54cf3b5c3ecfd52b9c2e23a367496659e5e8e6692e8ac813e,PodSandboxId:96f5efbc2487124c26564cc4967d7ecc46a88ab84ae597c4236cedbbccc2f917,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634662512907550,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7158a1e3-713a-4702-b1d8-3553d7dfa0de,},Annotations:map[string]string{io.kubernetes.container.hash: 93fd4bd5,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1acb9b6c61f5fc4da5ee0a3781cf9ffef68c4867fd872852acc5e4fc6d721bf7,PodSandboxId:cbec98d5739c990986e9bb6c29758fd427f861cf2412bac4362f159bf0cf472c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1689634662106576475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dpnlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb78806d-3e64-4d07-a9d5-6bebaa1abe2d,},Annotations:map[string]string{io.kubernetes.container.hash: 17922504,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f89a87992124015181bbada6ad53a47d8a1b680af2a42284ebb50ebc0e56c3e,PodSandboxId:13a17920eb9da28301c091437bce641b398342ccf80bb56499db77b8a2013552,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1689634660694109931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-t4d2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e8166c1-8b07-4eca-9d2a-51d2142e7c08,},Annotations:map[string]string{io.kubernetes.container.hash: daf35be2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5359112c46eb47cdd3af5d5aec19ff2abaabc91270863590570596650e3aecd,PodSandboxId:84b5c00c0c09a074b5bb0c34ab9fb6d952424f5bb108bdf5aacb13f65f2e0ff6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1689634635016315026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c731a3514f98e74d0c0e942b30282b55,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 9d50df90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88888fbeeecaa7c6544687836984ad599b37fd124d1344cb23bd4d8200985121,PodSandboxId:0d0464abe6c14b9c7d15a1c003463fbccee13d2ed04534936a99abbd041ca8fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1689634633474882471,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f35cc67eaadee92a13127b4c9dd501d146a76b99d81048fa3c9557ab02e9a2d7,PodSandboxId:be9c23f96cb9c559ef50a19208e0fced8182189d75176cdb4bd1e9e2557ec0f9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1689634633330967857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41388bef09878d329575d8894181165b1675bef77f376fc71aa746031a1686b4,PodSandboxId:eab3e1882343b3a001a34fa04bce2e74487c01e5ed7245652cdd744f20bf107f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1689634633168328665,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0ef24da77c8ba3e688845e562219102,},Annotations:ma
p[string]string{io.kubernetes.container.hash: eb7203d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a8eeafcc-b412-4fad-b271-d9c7d5bf8c1c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:08:02 old-k8s-version-332820 crio[709]: time="2023-07-17 23:08:02.003486202Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=826a19a2-cf74-4707-a31c-97e38c14b0ed name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:08:02 old-k8s-version-332820 crio[709]: time="2023-07-17 23:08:02.003577309Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=826a19a2-cf74-4707-a31c-97e38c14b0ed name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:08:02 old-k8s-version-332820 crio[709]: time="2023-07-17 23:08:02.003736645Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:62b724cfd1a633a54cf3b5c3ecfd52b9c2e23a367496659e5e8e6692e8ac813e,PodSandboxId:96f5efbc2487124c26564cc4967d7ecc46a88ab84ae597c4236cedbbccc2f917,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634662512907550,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7158a1e3-713a-4702-b1d8-3553d7dfa0de,},Annotations:map[string]string{io.kubernetes.container.hash: 93fd4bd5,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1acb9b6c61f5fc4da5ee0a3781cf9ffef68c4867fd872852acc5e4fc6d721bf7,PodSandboxId:cbec98d5739c990986e9bb6c29758fd427f861cf2412bac4362f159bf0cf472c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1689634662106576475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dpnlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb78806d-3e64-4d07-a9d5-6bebaa1abe2d,},Annotations:map[string]string{io.kubernetes.container.hash: 17922504,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f89a87992124015181bbada6ad53a47d8a1b680af2a42284ebb50ebc0e56c3e,PodSandboxId:13a17920eb9da28301c091437bce641b398342ccf80bb56499db77b8a2013552,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1689634660694109931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-t4d2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e8166c1-8b07-4eca-9d2a-51d2142e7c08,},Annotations:map[string]string{io.kubernetes.container.hash: daf35be2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5359112c46eb47cdd3af5d5aec19ff2abaabc91270863590570596650e3aecd,PodSandboxId:84b5c00c0c09a074b5bb0c34ab9fb6d952424f5bb108bdf5aacb13f65f2e0ff6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1689634635016315026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c731a3514f98e74d0c0e942b30282b55,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 9d50df90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88888fbeeecaa7c6544687836984ad599b37fd124d1344cb23bd4d8200985121,PodSandboxId:0d0464abe6c14b9c7d15a1c003463fbccee13d2ed04534936a99abbd041ca8fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1689634633474882471,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f35cc67eaadee92a13127b4c9dd501d146a76b99d81048fa3c9557ab02e9a2d7,PodSandboxId:be9c23f96cb9c559ef50a19208e0fced8182189d75176cdb4bd1e9e2557ec0f9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1689634633330967857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41388bef09878d329575d8894181165b1675bef77f376fc71aa746031a1686b4,PodSandboxId:eab3e1882343b3a001a34fa04bce2e74487c01e5ed7245652cdd744f20bf107f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1689634633168328665,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0ef24da77c8ba3e688845e562219102,},Annotations:ma
p[string]string{io.kubernetes.container.hash: eb7203d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=826a19a2-cf74-4707-a31c-97e38c14b0ed name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:08:02 old-k8s-version-332820 crio[709]: time="2023-07-17 23:08:02.036928608Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cce9ca95-ebc9-4f8f-a09b-d937e261d7ad name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:08:02 old-k8s-version-332820 crio[709]: time="2023-07-17 23:08:02.037032822Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cce9ca95-ebc9-4f8f-a09b-d937e261d7ad name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:08:02 old-k8s-version-332820 crio[709]: time="2023-07-17 23:08:02.037286443Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:62b724cfd1a633a54cf3b5c3ecfd52b9c2e23a367496659e5e8e6692e8ac813e,PodSandboxId:96f5efbc2487124c26564cc4967d7ecc46a88ab84ae597c4236cedbbccc2f917,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634662512907550,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7158a1e3-713a-4702-b1d8-3553d7dfa0de,},Annotations:map[string]string{io.kubernetes.container.hash: 93fd4bd5,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1acb9b6c61f5fc4da5ee0a3781cf9ffef68c4867fd872852acc5e4fc6d721bf7,PodSandboxId:cbec98d5739c990986e9bb6c29758fd427f861cf2412bac4362f159bf0cf472c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1689634662106576475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dpnlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb78806d-3e64-4d07-a9d5-6bebaa1abe2d,},Annotations:map[string]string{io.kubernetes.container.hash: 17922504,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f89a87992124015181bbada6ad53a47d8a1b680af2a42284ebb50ebc0e56c3e,PodSandboxId:13a17920eb9da28301c091437bce641b398342ccf80bb56499db77b8a2013552,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1689634660694109931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-t4d2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e8166c1-8b07-4eca-9d2a-51d2142e7c08,},Annotations:map[string]string{io.kubernetes.container.hash: daf35be2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5359112c46eb47cdd3af5d5aec19ff2abaabc91270863590570596650e3aecd,PodSandboxId:84b5c00c0c09a074b5bb0c34ab9fb6d952424f5bb108bdf5aacb13f65f2e0ff6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1689634635016315026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c731a3514f98e74d0c0e942b30282b55,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 9d50df90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88888fbeeecaa7c6544687836984ad599b37fd124d1344cb23bd4d8200985121,PodSandboxId:0d0464abe6c14b9c7d15a1c003463fbccee13d2ed04534936a99abbd041ca8fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1689634633474882471,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f35cc67eaadee92a13127b4c9dd501d146a76b99d81048fa3c9557ab02e9a2d7,PodSandboxId:be9c23f96cb9c559ef50a19208e0fced8182189d75176cdb4bd1e9e2557ec0f9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1689634633330967857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41388bef09878d329575d8894181165b1675bef77f376fc71aa746031a1686b4,PodSandboxId:eab3e1882343b3a001a34fa04bce2e74487c01e5ed7245652cdd744f20bf107f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1689634633168328665,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0ef24da77c8ba3e688845e562219102,},Annotations:ma
p[string]string{io.kubernetes.container.hash: eb7203d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cce9ca95-ebc9-4f8f-a09b-d937e261d7ad name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:08:02 old-k8s-version-332820 crio[709]: time="2023-07-17 23:08:02.078457523Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1cd754a4-612d-47d1-b22a-7d7b000a6c04 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:08:02 old-k8s-version-332820 crio[709]: time="2023-07-17 23:08:02.078552206Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1cd754a4-612d-47d1-b22a-7d7b000a6c04 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:08:02 old-k8s-version-332820 crio[709]: time="2023-07-17 23:08:02.078716960Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:62b724cfd1a633a54cf3b5c3ecfd52b9c2e23a367496659e5e8e6692e8ac813e,PodSandboxId:96f5efbc2487124c26564cc4967d7ecc46a88ab84ae597c4236cedbbccc2f917,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634662512907550,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7158a1e3-713a-4702-b1d8-3553d7dfa0de,},Annotations:map[string]string{io.kubernetes.container.hash: 93fd4bd5,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1acb9b6c61f5fc4da5ee0a3781cf9ffef68c4867fd872852acc5e4fc6d721bf7,PodSandboxId:cbec98d5739c990986e9bb6c29758fd427f861cf2412bac4362f159bf0cf472c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1689634662106576475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dpnlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb78806d-3e64-4d07-a9d5-6bebaa1abe2d,},Annotations:map[string]string{io.kubernetes.container.hash: 17922504,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f89a87992124015181bbada6ad53a47d8a1b680af2a42284ebb50ebc0e56c3e,PodSandboxId:13a17920eb9da28301c091437bce641b398342ccf80bb56499db77b8a2013552,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1689634660694109931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-t4d2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e8166c1-8b07-4eca-9d2a-51d2142e7c08,},Annotations:map[string]string{io.kubernetes.container.hash: daf35be2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5359112c46eb47cdd3af5d5aec19ff2abaabc91270863590570596650e3aecd,PodSandboxId:84b5c00c0c09a074b5bb0c34ab9fb6d952424f5bb108bdf5aacb13f65f2e0ff6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1689634635016315026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c731a3514f98e74d0c0e942b30282b55,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 9d50df90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88888fbeeecaa7c6544687836984ad599b37fd124d1344cb23bd4d8200985121,PodSandboxId:0d0464abe6c14b9c7d15a1c003463fbccee13d2ed04534936a99abbd041ca8fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1689634633474882471,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f35cc67eaadee92a13127b4c9dd501d146a76b99d81048fa3c9557ab02e9a2d7,PodSandboxId:be9c23f96cb9c559ef50a19208e0fced8182189d75176cdb4bd1e9e2557ec0f9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1689634633330967857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41388bef09878d329575d8894181165b1675bef77f376fc71aa746031a1686b4,PodSandboxId:eab3e1882343b3a001a34fa04bce2e74487c01e5ed7245652cdd744f20bf107f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1689634633168328665,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0ef24da77c8ba3e688845e562219102,},Annotations:ma
p[string]string{io.kubernetes.container.hash: eb7203d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1cd754a4-612d-47d1-b22a-7d7b000a6c04 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	62b724cfd1a63       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   10 minutes ago      Running             storage-provisioner       0                   96f5efbc24871
	1acb9b6c61f5f       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   10 minutes ago      Running             kube-proxy                0                   cbec98d5739c9
	9f89a87992124       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   10 minutes ago      Running             coredns                   0                   13a17920eb9da
	b5359112c46eb       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   10 minutes ago      Running             etcd                      0                   84b5c00c0c09a
	88888fbeeecaa       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   10 minutes ago      Running             kube-scheduler            0                   0d0464abe6c14
	f35cc67eaadee       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   10 minutes ago      Running             kube-controller-manager   0                   be9c23f96cb9c
	41388bef09878       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   10 minutes ago      Running             kube-apiserver            0                   eab3e1882343b
	
	* 
	* ==> coredns [9f89a87992124015181bbada6ad53a47d8a1b680af2a42284ebb50ebc0e56c3e] <==
	* .:53
	2023-07-17T22:57:41.166Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-07-17T22:57:41.166Z [INFO] CoreDNS-1.6.2
	2023-07-17T22:57:41.166Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2023-07-17T22:58:14.004Z [INFO] plugin/reload: Running configuration MD5 = 06ff7f9bb57317d7ab02f5fb9baaa00d
	[INFO] Reloading complete
	2023-07-17T22:58:14.013Z [INFO] 127.0.0.1:37326 - 45130 "HINFO IN 6798697741476462037.7490281844572158290. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009508661s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-332820
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-332820
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8
	                    minikube.k8s.io/name=old-k8s-version-332820
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T22_57_23_0700
	                    minikube.k8s.io/version=v1.31.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 22:57:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 23:07:19 +0000   Mon, 17 Jul 2023 22:57:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 23:07:19 +0000   Mon, 17 Jul 2023 22:57:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 23:07:19 +0000   Mon, 17 Jul 2023 22:57:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 23:07:19 +0000   Mon, 17 Jul 2023 22:57:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.149
	  Hostname:    old-k8s-version-332820
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 8eea51a4a36646208bfdf952d5c22016
	 System UUID:                8eea51a4-a366-4620-8bfd-f952d5c22016
	 Boot ID:                    f5937962-8992-4fbd-b792-6457e4896f08
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-t4d2t                          100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (3%!)(MISSING)        170Mi (8%!)(MISSING)     10m
	  kube-system                etcd-old-k8s-version-332820                       0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         9m20s
	  kube-system                kube-apiserver-old-k8s-version-332820             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         9m32s
	  kube-system                kube-controller-manager-old-k8s-version-332820    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         9m43s
	  kube-system                kube-proxy-dpnlw                                  0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         10m
	  kube-system                kube-scheduler-old-k8s-version-332820             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         9m22s
	  kube-system                metrics-server-74d5856cc6-59wx5                   100m (5%!)(MISSING)     0 (0%!)(MISSING)      200Mi (9%!)(MISSING)       0 (0%!)(MISSING)         10m
	  kube-system                storage-provisioner                               0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%!)(MISSING)   0 (0%!)(MISSING)
	  memory             270Mi (12%!)(MISSING)  170Mi (8%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)       0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet, old-k8s-version-332820     Node old-k8s-version-332820 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x7 over 10m)  kubelet, old-k8s-version-332820     Node old-k8s-version-332820 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet, old-k8s-version-332820     Node old-k8s-version-332820 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                kube-proxy, old-k8s-version-332820  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Jul17 22:51] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.083287] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.653381] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.324753] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.164626] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.548718] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.575410] systemd-fstab-generator[635]: Ignoring "noauto" for root device
	[  +0.155978] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.162368] systemd-fstab-generator[659]: Ignoring "noauto" for root device
	[  +0.138267] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.264471] systemd-fstab-generator[694]: Ignoring "noauto" for root device
	[ +20.246133] systemd-fstab-generator[1026]: Ignoring "noauto" for root device
	[  +0.490550] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jul17 22:52] kauditd_printk_skb: 13 callbacks suppressed
	[ +10.389388] kauditd_printk_skb: 2 callbacks suppressed
	[Jul17 22:56] kauditd_printk_skb: 3 callbacks suppressed
	[Jul17 22:57] systemd-fstab-generator[3227]: Ignoring "noauto" for root device
	[ +39.443745] kauditd_printk_skb: 11 callbacks suppressed
	
	* 
	* ==> etcd [b5359112c46eb47cdd3af5d5aec19ff2abaabc91270863590570596650e3aecd] <==
	* 2023-07-17 22:57:15.161860 I | raft: d484739f521fd65e became follower at term 0
	2023-07-17 22:57:15.161880 I | raft: newRaft d484739f521fd65e [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-07-17 22:57:15.161903 I | raft: d484739f521fd65e became follower at term 1
	2023-07-17 22:57:15.171386 W | auth: simple token is not cryptographically signed
	2023-07-17 22:57:15.176736 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-07-17 22:57:15.178683 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-07-17 22:57:15.178938 I | embed: listening for metrics on http://192.168.50.149:2381
	2023-07-17 22:57:15.179558 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-07-17 22:57:15.180421 I | etcdserver: d484739f521fd65e as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-07-17 22:57:15.180780 I | etcdserver/membership: added member d484739f521fd65e [https://192.168.50.149:2380] to cluster 5bc15d5d2e20321
	2023-07-17 22:57:15.362429 I | raft: d484739f521fd65e is starting a new election at term 1
	2023-07-17 22:57:15.362549 I | raft: d484739f521fd65e became candidate at term 2
	2023-07-17 22:57:15.362640 I | raft: d484739f521fd65e received MsgVoteResp from d484739f521fd65e at term 2
	2023-07-17 22:57:15.362698 I | raft: d484739f521fd65e became leader at term 2
	2023-07-17 22:57:15.362722 I | raft: raft.node: d484739f521fd65e elected leader d484739f521fd65e at term 2
	2023-07-17 22:57:15.363416 I | etcdserver: setting up the initial cluster version to 3.3
	2023-07-17 22:57:15.364522 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-07-17 22:57:15.364587 I | etcdserver/api: enabled capabilities for version 3.3
	2023-07-17 22:57:15.364628 I | etcdserver: published {Name:old-k8s-version-332820 ClientURLs:[https://192.168.50.149:2379]} to cluster 5bc15d5d2e20321
	2023-07-17 22:57:15.364904 I | embed: ready to serve client requests
	2023-07-17 22:57:15.366115 I | embed: serving client requests on 192.168.50.149:2379
	2023-07-17 22:57:15.366497 I | embed: ready to serve client requests
	2023-07-17 22:57:15.367554 I | embed: serving client requests on 127.0.0.1:2379
	2023-07-17 23:07:15.567040 I | mvcc: store.index: compact 669
	2023-07-17 23:07:15.569060 I | mvcc: finished scheduled compaction at 669 (took 1.278024ms)
	
	* 
	* ==> kernel <==
	*  23:08:02 up 16 min,  0 users,  load average: 0.29, 0.21, 0.15
	Linux old-k8s-version-332820 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [41388bef09878d329575d8894181165b1675bef77f376fc71aa746031a1686b4] <==
	* I0717 23:00:42.966811       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0717 23:00:42.967158       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 23:00:42.967354       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 23:00:42.967386       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 23:02:19.837712       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0717 23:02:19.838048       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 23:02:19.838130       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 23:02:19.838153       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 23:03:19.838741       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0717 23:03:19.839054       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 23:03:19.839148       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 23:03:19.839265       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 23:05:19.839695       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0717 23:05:19.839824       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 23:05:19.839903       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 23:05:19.839910       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 23:07:19.841548       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0717 23:07:19.841888       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 23:07:19.842010       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 23:07:19.842041       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [f35cc67eaadee92a13127b4c9dd501d146a76b99d81048fa3c9557ab02e9a2d7] <==
	* E0717 23:01:40.817507       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 23:01:54.754030       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 23:02:11.069826       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 23:02:26.756635       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 23:02:41.322322       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 23:02:58.758932       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 23:03:11.574332       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 23:03:30.761480       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 23:03:41.826382       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 23:04:02.763469       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 23:04:12.078956       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 23:04:34.765895       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 23:04:42.331103       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 23:05:06.768431       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 23:05:12.583731       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 23:05:38.769836       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 23:05:42.835502       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 23:06:10.772089       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 23:06:13.087578       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 23:06:42.774433       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 23:06:43.340297       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0717 23:07:13.592552       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 23:07:14.777579       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 23:07:43.845453       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 23:07:46.780088       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [1acb9b6c61f5fc4da5ee0a3781cf9ffef68c4867fd872852acc5e4fc6d721bf7] <==
	* W0717 22:57:42.418330       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0717 22:57:42.431239       1 node.go:135] Successfully retrieved node IP: 192.168.50.149
	I0717 22:57:42.431345       1 server_others.go:149] Using iptables Proxier.
	I0717 22:57:42.432942       1 server.go:529] Version: v1.16.0
	I0717 22:57:42.436325       1 config.go:313] Starting service config controller
	I0717 22:57:42.436429       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0717 22:57:42.436472       1 config.go:131] Starting endpoints config controller
	I0717 22:57:42.436528       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0717 22:57:42.536778       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0717 22:57:42.539326       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [88888fbeeecaa7c6544687836984ad599b37fd124d1344cb23bd4d8200985121] <==
	* W0717 22:57:18.835439       1 authentication.go:79] Authentication is disabled
	I0717 22:57:18.835449       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0717 22:57:18.835813       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0717 22:57:18.887948       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 22:57:18.889347       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 22:57:18.890911       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 22:57:18.891156       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 22:57:18.891327       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 22:57:18.891406       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 22:57:18.891470       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 22:57:18.891839       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 22:57:18.891951       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 22:57:18.892985       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 22:57:18.893531       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 22:57:19.892109       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 22:57:19.892631       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 22:57:19.894907       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 22:57:19.899893       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 22:57:19.901535       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 22:57:19.903490       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 22:57:19.906527       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 22:57:19.907626       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 22:57:19.908727       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 22:57:19.921425       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 22:57:19.924522       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-07-17 22:51:25 UTC, ends at Mon 2023-07-17 23:08:02 UTC. --
	Jul 17 23:03:24 old-k8s-version-332820 kubelet[3233]: E0717 23:03:24.426054    3233 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 17 23:03:24 old-k8s-version-332820 kubelet[3233]: E0717 23:03:24.426115    3233 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 17 23:03:24 old-k8s-version-332820 kubelet[3233]: E0717 23:03:24.426146    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Jul 17 23:03:39 old-k8s-version-332820 kubelet[3233]: E0717 23:03:39.408094    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:03:52 old-k8s-version-332820 kubelet[3233]: E0717 23:03:52.407851    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:04:03 old-k8s-version-332820 kubelet[3233]: E0717 23:04:03.407626    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:04:17 old-k8s-version-332820 kubelet[3233]: E0717 23:04:17.407619    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:04:30 old-k8s-version-332820 kubelet[3233]: E0717 23:04:30.407594    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:04:45 old-k8s-version-332820 kubelet[3233]: E0717 23:04:45.407299    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:04:57 old-k8s-version-332820 kubelet[3233]: E0717 23:04:57.408319    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:05:08 old-k8s-version-332820 kubelet[3233]: E0717 23:05:08.408327    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:05:23 old-k8s-version-332820 kubelet[3233]: E0717 23:05:23.408288    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:05:34 old-k8s-version-332820 kubelet[3233]: E0717 23:05:34.407721    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:05:49 old-k8s-version-332820 kubelet[3233]: E0717 23:05:49.407865    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:06:02 old-k8s-version-332820 kubelet[3233]: E0717 23:06:02.407463    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:06:15 old-k8s-version-332820 kubelet[3233]: E0717 23:06:15.407991    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:06:30 old-k8s-version-332820 kubelet[3233]: E0717 23:06:30.407900    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:06:43 old-k8s-version-332820 kubelet[3233]: E0717 23:06:43.408325    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:06:57 old-k8s-version-332820 kubelet[3233]: E0717 23:06:57.408577    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:07:08 old-k8s-version-332820 kubelet[3233]: E0717 23:07:08.407849    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:07:11 old-k8s-version-332820 kubelet[3233]: E0717 23:07:11.484381    3233 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Jul 17 23:07:20 old-k8s-version-332820 kubelet[3233]: E0717 23:07:20.407847    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:07:34 old-k8s-version-332820 kubelet[3233]: E0717 23:07:34.407734    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:07:48 old-k8s-version-332820 kubelet[3233]: E0717 23:07:48.407539    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:07:59 old-k8s-version-332820 kubelet[3233]: E0717 23:07:59.408320    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [62b724cfd1a633a54cf3b5c3ecfd52b9c2e23a367496659e5e8e6692e8ac813e] <==
	* I0717 22:57:42.703939       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 22:57:42.715523       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 22:57:42.715618       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 22:57:42.727810       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 22:57:42.729276       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-332820_0a5cd2fd-2dd8-41df-91a8-6b8401e0fdf5!
	I0717 22:57:42.731052       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5c0cfa39-ec6e-4c49-aca3-a84ac182f2fb", APIVersion:"v1", ResourceVersion:"420", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-332820_0a5cd2fd-2dd8-41df-91a8-6b8401e0fdf5 became leader
	I0717 22:57:42.830692       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-332820_0a5cd2fd-2dd8-41df-91a8-6b8401e0fdf5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-332820 -n old-k8s-version-332820
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-332820 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-59wx5
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-332820 describe pod metrics-server-74d5856cc6-59wx5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-332820 describe pod metrics-server-74d5856cc6-59wx5: exit status 1 (81.766432ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-59wx5" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-332820 describe pod metrics-server-74d5856cc6-59wx5: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.46s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.49s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0717 23:00:31.746980   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-571296 -n embed-certs-571296
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-07-17 23:09:26.271242075 +0000 UTC m=+5330.061028078
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-571296 -n embed-certs-571296
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-571296 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-571296 logs -n 25: (1.699618826s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-431736                                 | NoKubernetes-431736          | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:42 UTC |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p pause-482945                                        | pause-482945                 | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:42 UTC |
	| start   | -p embed-certs-571296                                  | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-366864                              | cert-expiration-366864       | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:42 UTC |
	| delete  | -p                                                     | disable-driver-mounts-615088 | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:42 UTC |
	|         | disable-driver-mounts-615088                           |                              |         |         |                     |                     |
	| start   | -p no-preload-935524                                   | no-preload-935524            | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| ssh     | -p NoKubernetes-431736 sudo                            | NoKubernetes-431736          | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-431736                                 | NoKubernetes-431736          | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:42 UTC |
	| start   | -p                                                     | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:44 UTC |
	|         | default-k8s-diff-port-504828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-332820        | old-k8s-version-332820       | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-332820                              | old-k8s-version-332820       | jenkins | v1.31.0 | 17 Jul 23 22:43 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-571296            | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 22:44 UTC | 17 Jul 23 22:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-571296                                  | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 22:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-935524             | no-preload-935524            | jenkins | v1.31.0 | 17 Jul 23 22:44 UTC | 17 Jul 23 22:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-935524                                   | no-preload-935524            | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-504828  | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC | 17 Jul 23 22:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC |                     |
	|         | default-k8s-diff-port-504828                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-332820             | old-k8s-version-332820       | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-332820                              | old-k8s-version-332820       | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC | 17 Jul 23 22:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-571296                 | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 22:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-571296                                  | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 22:46 UTC | 17 Jul 23 23:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-935524                  | no-preload-935524            | jenkins | v1.31.0 | 17 Jul 23 22:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-504828       | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-935524                                   | no-preload-935524            | jenkins | v1.31.0 | 17 Jul 23 22:47 UTC | 17 Jul 23 22:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:47 UTC | 17 Jul 23 23:01 UTC |
	|         | default-k8s-diff-port-504828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 22:47:37
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 22:47:37.527061   54649 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:47:37.527212   54649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:47:37.527221   54649 out.go:309] Setting ErrFile to fd 2...
	I0717 22:47:37.527228   54649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:47:37.527438   54649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-15759/.minikube/bin
	I0717 22:47:37.527980   54649 out.go:303] Setting JSON to false
	I0717 22:47:37.528901   54649 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9010,"bootTime":1689625048,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 22:47:37.528964   54649 start.go:138] virtualization: kvm guest
	I0717 22:47:37.531211   54649 out.go:177] * [default-k8s-diff-port-504828] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 22:47:37.533158   54649 notify.go:220] Checking for updates...
	I0717 22:47:37.533188   54649 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 22:47:37.535650   54649 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 22:47:37.537120   54649 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:47:37.538622   54649 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-15759/.minikube
	I0717 22:47:37.540087   54649 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 22:47:37.541460   54649 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 22:47:37.543023   54649 config.go:182] Loaded profile config "default-k8s-diff-port-504828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:47:37.543367   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:47:37.543410   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:47:37.557812   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41209
	I0717 22:47:37.558215   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:47:37.558854   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:47:37.558880   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:47:37.559209   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:47:37.559422   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:47:37.559654   54649 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 22:47:37.559930   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:47:37.559964   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:47:37.574919   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36243
	I0717 22:47:37.575395   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:47:37.575884   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:47:37.575907   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:47:37.576216   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:47:37.576373   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:47:37.609134   54649 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 22:47:37.610479   54649 start.go:298] selected driver: kvm2
	I0717 22:47:37.610497   54649 start.go:880] validating driver "kvm2" against &{Name:default-k8s-diff-port-504828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-504828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.118 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:47:37.610629   54649 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 22:47:37.611264   54649 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:37.611363   54649 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16899-15759/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 22:47:37.626733   54649 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.0
	I0717 22:47:37.627071   54649 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 22:47:37.627102   54649 cni.go:84] Creating CNI manager for ""
	I0717 22:47:37.627113   54649 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:47:37.627123   54649 start_flags.go:319] config:
	{Name:default-k8s-diff-port-504828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-504828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.118 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:47:37.627251   54649 iso.go:125] acquiring lock: {Name:mk0fc3164b0f4222cb2c3b274841202f1f6c0fc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:37.629965   54649 out.go:177] * Starting control plane node default-k8s-diff-port-504828 in cluster default-k8s-diff-port-504828
	I0717 22:47:32.766201   54573 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 22:47:32.766339   54573 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/config.json ...
	I0717 22:47:32.766467   54573 cache.go:107] acquiring lock: {Name:mk01bc74ef42cddd6cd05b75ec900cb2a05e15de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:32.766476   54573 cache.go:107] acquiring lock: {Name:mk672b2225edd60ecd8aa8e076d6e3579923204f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:32.766504   54573 cache.go:107] acquiring lock: {Name:mk1ec8b402c7d0685d25060e32c2f651eb2916fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:32.766539   54573 cache.go:107] acquiring lock: {Name:mkd18484b6a11488d3306ab3200047f68a7be660 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:32.766573   54573 start.go:365] acquiring machines lock for no-preload-935524: {Name:mk8a02d78ba6485e1a7d39c2e7feff944c6a9dbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 22:47:32.766576   54573 cache.go:107] acquiring lock: {Name:mkb3015efe537f010ace1f299991daca38e60845 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:32.766610   54573 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3 exists
	I0717 22:47:32.766586   54573 cache.go:107] acquiring lock: {Name:mkc8c0d0fa55ce47999adb3e73b20a24cafac7c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:32.766637   54573 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0 exists
	I0717 22:47:32.766653   54573 cache.go:96] cache image "registry.k8s.io/etcd:3.5.7-0" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0" took 100.155µs
	I0717 22:47:32.766659   54573 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I0717 22:47:32.766648   54573 cache.go:107] acquiring lock: {Name:mke2add190f322b938de65cf40269b08b3acfca3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:32.766656   54573 cache.go:107] acquiring lock: {Name:mk075beefd466e66915afc5543af4c3b175d5d80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:32.766681   54573 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 187.554µs
	I0717 22:47:32.766710   54573 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0717 22:47:32.766670   54573 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.7-0 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0 succeeded
	I0717 22:47:32.766735   54573 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1" took 88.679µs
	I0717 22:47:32.766748   54573 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0717 22:47:32.766629   54573 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3 exists
	I0717 22:47:32.766763   54573 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.27.3" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3" took 231.824µs
	I0717 22:47:32.766771   54573 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3 exists
	I0717 22:47:32.766717   54573 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I0717 22:47:32.766570   54573 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0717 22:47:32.766780   54573 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.27.3" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3" took 194.904µs
	I0717 22:47:32.766790   54573 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.27.3 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3 succeeded
	I0717 22:47:32.766787   54573 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 329.218µs
	I0717 22:47:32.766631   54573 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.27.3" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3" took 161.864µs
	I0717 22:47:32.766805   54573 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.27.3 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3 succeeded
	I0717 22:47:32.766774   54573 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.27.3 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3 succeeded
	I0717 22:47:32.766672   54573 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3 exists
	I0717 22:47:32.766820   54573 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.27.3" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3" took 238.693µs
	I0717 22:47:32.766828   54573 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.27.3 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3 succeeded
	I0717 22:47:32.766797   54573 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0717 22:47:32.766834   54573 cache.go:87] Successfully saved all images to host disk.
	I0717 22:47:37.631294   54649 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 22:47:37.631336   54649 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0717 22:47:37.631348   54649 cache.go:57] Caching tarball of preloaded images
	I0717 22:47:37.631442   54649 preload.go:174] Found /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 22:47:37.631456   54649 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 22:47:37.631555   54649 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/config.json ...
	I0717 22:47:37.631742   54649 start.go:365] acquiring machines lock for default-k8s-diff-port-504828: {Name:mk8a02d78ba6485e1a7d39c2e7feff944c6a9dbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 22:47:37.905723   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:47:40.977774   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:47:47.057804   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:47:50.129875   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:47:56.209815   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:47:59.281810   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:05.361786   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:08.433822   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:14.513834   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:17.585682   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:23.665811   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:26.737819   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:32.817800   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:35.889839   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:41.969818   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:45.041851   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:51.121816   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:54.193896   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:00.273812   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:03.345848   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:09.425796   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:12.497873   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:18.577847   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:21.649767   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:27.729823   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:30.801947   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:36.881840   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:39.953832   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:46.033825   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:49.105862   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:55.185814   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:58.257881   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:50:04.337852   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:50:07.409871   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:50:10.413979   54248 start.go:369] acquired machines lock for "embed-certs-571296" in 3m17.321305769s
	I0717 22:50:10.414028   54248 start.go:96] Skipping create...Using existing machine configuration
	I0717 22:50:10.414048   54248 fix.go:54] fixHost starting: 
	I0717 22:50:10.414400   54248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:50:10.414437   54248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:50:10.428711   54248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35793
	I0717 22:50:10.429132   54248 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:50:10.429628   54248 main.go:141] libmachine: Using API Version  1
	I0717 22:50:10.429671   54248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:50:10.430088   54248 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:50:10.430301   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:50:10.430491   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetState
	I0717 22:50:10.432357   54248 fix.go:102] recreateIfNeeded on embed-certs-571296: state=Stopped err=<nil>
	I0717 22:50:10.432375   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	W0717 22:50:10.432552   54248 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 22:50:10.434264   54248 out.go:177] * Restarting existing kvm2 VM for "embed-certs-571296" ...
	I0717 22:50:10.411622   53870 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:50:10.411707   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:50:10.413827   53870 machine.go:91] provisioned docker machine in 4m37.430605556s
	I0717 22:50:10.413860   53870 fix.go:56] fixHost completed within 4m37.451042302s
	I0717 22:50:10.413870   53870 start.go:83] releasing machines lock for "old-k8s-version-332820", held for 4m37.451061598s
	W0717 22:50:10.413907   53870 start.go:672] error starting host: provision: host is not running
	W0717 22:50:10.414004   53870 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0717 22:50:10.414014   53870 start.go:687] Will try again in 5 seconds ...
	I0717 22:50:10.435984   54248 main.go:141] libmachine: (embed-certs-571296) Calling .Start
	I0717 22:50:10.436181   54248 main.go:141] libmachine: (embed-certs-571296) Ensuring networks are active...
	I0717 22:50:10.436939   54248 main.go:141] libmachine: (embed-certs-571296) Ensuring network default is active
	I0717 22:50:10.437252   54248 main.go:141] libmachine: (embed-certs-571296) Ensuring network mk-embed-certs-571296 is active
	I0717 22:50:10.437751   54248 main.go:141] libmachine: (embed-certs-571296) Getting domain xml...
	I0717 22:50:10.438706   54248 main.go:141] libmachine: (embed-certs-571296) Creating domain...
	I0717 22:50:10.795037   54248 main.go:141] libmachine: (embed-certs-571296) Waiting to get IP...
	I0717 22:50:10.795808   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:10.796178   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:10.796237   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:10.796156   55063 retry.go:31] will retry after 189.390538ms: waiting for machine to come up
	I0717 22:50:10.987904   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:10.988435   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:10.988466   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:10.988382   55063 retry.go:31] will retry after 260.75291ms: waiting for machine to come up
	I0717 22:50:11.250849   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:11.251279   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:11.251323   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:11.251218   55063 retry.go:31] will retry after 421.317262ms: waiting for machine to come up
	I0717 22:50:11.673813   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:11.674239   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:11.674259   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:11.674206   55063 retry.go:31] will retry after 512.64366ms: waiting for machine to come up
	I0717 22:50:12.188810   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:12.189271   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:12.189298   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:12.189222   55063 retry.go:31] will retry after 489.02322ms: waiting for machine to come up
	I0717 22:50:12.679695   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:12.680108   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:12.680137   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:12.680012   55063 retry.go:31] will retry after 589.269905ms: waiting for machine to come up
	I0717 22:50:15.415915   53870 start.go:365] acquiring machines lock for old-k8s-version-332820: {Name:mk8a02d78ba6485e1a7d39c2e7feff944c6a9dbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 22:50:13.270668   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:13.271039   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:13.271069   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:13.270984   55063 retry.go:31] will retry after 722.873214ms: waiting for machine to come up
	I0717 22:50:13.996101   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:13.996681   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:13.996711   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:13.996623   55063 retry.go:31] will retry after 1.381840781s: waiting for machine to come up
	I0717 22:50:15.379777   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:15.380169   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:15.380197   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:15.380118   55063 retry.go:31] will retry after 1.335563851s: waiting for machine to come up
	I0717 22:50:16.718113   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:16.718637   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:16.718660   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:16.718575   55063 retry.go:31] will retry after 1.96500286s: waiting for machine to come up
	I0717 22:50:18.685570   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:18.686003   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:18.686023   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:18.685960   55063 retry.go:31] will retry after 2.007114073s: waiting for machine to come up
	I0717 22:50:20.694500   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:20.694961   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:20.694984   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:20.694916   55063 retry.go:31] will retry after 3.344996038s: waiting for machine to come up
	I0717 22:50:24.043423   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:24.043777   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:24.043799   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:24.043732   55063 retry.go:31] will retry after 3.031269711s: waiting for machine to come up
	I0717 22:50:27.077029   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:27.077447   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:27.077493   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:27.077379   55063 retry.go:31] will retry after 3.787872248s: waiting for machine to come up
	I0717 22:50:32.158403   54573 start.go:369] acquired machines lock for "no-preload-935524" in 2m59.391772757s
	I0717 22:50:32.158456   54573 start.go:96] Skipping create...Using existing machine configuration
	I0717 22:50:32.158478   54573 fix.go:54] fixHost starting: 
	I0717 22:50:32.158917   54573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:50:32.158960   54573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:50:32.177532   54573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41803
	I0717 22:50:32.177962   54573 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:50:32.178564   54573 main.go:141] libmachine: Using API Version  1
	I0717 22:50:32.178596   54573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:50:32.178981   54573 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:50:32.179197   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:50:32.179381   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetState
	I0717 22:50:32.181079   54573 fix.go:102] recreateIfNeeded on no-preload-935524: state=Stopped err=<nil>
	I0717 22:50:32.181104   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	W0717 22:50:32.181273   54573 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 22:50:32.183782   54573 out.go:177] * Restarting existing kvm2 VM for "no-preload-935524" ...
	I0717 22:50:32.185307   54573 main.go:141] libmachine: (no-preload-935524) Calling .Start
	I0717 22:50:32.185504   54573 main.go:141] libmachine: (no-preload-935524) Ensuring networks are active...
	I0717 22:50:32.186119   54573 main.go:141] libmachine: (no-preload-935524) Ensuring network default is active
	I0717 22:50:32.186543   54573 main.go:141] libmachine: (no-preload-935524) Ensuring network mk-no-preload-935524 is active
	I0717 22:50:32.186958   54573 main.go:141] libmachine: (no-preload-935524) Getting domain xml...
	I0717 22:50:32.187647   54573 main.go:141] libmachine: (no-preload-935524) Creating domain...
	I0717 22:50:32.567258   54573 main.go:141] libmachine: (no-preload-935524) Waiting to get IP...
	I0717 22:50:32.568423   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:32.568941   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:32.569021   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:32.568937   55160 retry.go:31] will retry after 239.368857ms: waiting for machine to come up
	I0717 22:50:30.866978   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:30.867476   54248 main.go:141] libmachine: (embed-certs-571296) Found IP for machine: 192.168.61.179
	I0717 22:50:30.867494   54248 main.go:141] libmachine: (embed-certs-571296) Reserving static IP address...
	I0717 22:50:30.867507   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has current primary IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:30.867958   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "embed-certs-571296", mac: "52:54:00:e0:4c:e5", ip: "192.168.61.179"} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:30.867994   54248 main.go:141] libmachine: (embed-certs-571296) Reserved static IP address: 192.168.61.179
	I0717 22:50:30.868012   54248 main.go:141] libmachine: (embed-certs-571296) DBG | skip adding static IP to network mk-embed-certs-571296 - found existing host DHCP lease matching {name: "embed-certs-571296", mac: "52:54:00:e0:4c:e5", ip: "192.168.61.179"}
	I0717 22:50:30.868034   54248 main.go:141] libmachine: (embed-certs-571296) DBG | Getting to WaitForSSH function...
	I0717 22:50:30.868052   54248 main.go:141] libmachine: (embed-certs-571296) Waiting for SSH to be available...
	I0717 22:50:30.870054   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:30.870366   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:30.870402   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:30.870514   54248 main.go:141] libmachine: (embed-certs-571296) DBG | Using SSH client type: external
	I0717 22:50:30.870545   54248 main.go:141] libmachine: (embed-certs-571296) DBG | Using SSH private key: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa (-rw-------)
	I0717 22:50:30.870596   54248 main.go:141] libmachine: (embed-certs-571296) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.179 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 22:50:30.870623   54248 main.go:141] libmachine: (embed-certs-571296) DBG | About to run SSH command:
	I0717 22:50:30.870637   54248 main.go:141] libmachine: (embed-certs-571296) DBG | exit 0
	I0717 22:50:30.965028   54248 main.go:141] libmachine: (embed-certs-571296) DBG | SSH cmd err, output: <nil>: 
	I0717 22:50:30.965413   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetConfigRaw
	I0717 22:50:30.966103   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetIP
	I0717 22:50:30.968689   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:30.969031   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:30.969068   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:30.969282   54248 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296/config.json ...
	I0717 22:50:30.969474   54248 machine.go:88] provisioning docker machine ...
	I0717 22:50:30.969491   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:50:30.969725   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetMachineName
	I0717 22:50:30.969910   54248 buildroot.go:166] provisioning hostname "embed-certs-571296"
	I0717 22:50:30.969928   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetMachineName
	I0717 22:50:30.970057   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:50:30.972055   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:30.972390   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:30.972416   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:30.972590   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:50:30.972732   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:30.972851   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:30.973006   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:50:30.973150   54248 main.go:141] libmachine: Using SSH client type: native
	I0717 22:50:30.973572   54248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.179 22 <nil> <nil>}
	I0717 22:50:30.973586   54248 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-571296 && echo "embed-certs-571296" | sudo tee /etc/hostname
	I0717 22:50:31.119085   54248 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-571296
	
	I0717 22:50:31.119112   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:50:31.121962   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.122254   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:31.122287   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.122439   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:50:31.122634   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:31.122824   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:31.122969   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:50:31.123140   54248 main.go:141] libmachine: Using SSH client type: native
	I0717 22:50:31.123581   54248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.179 22 <nil> <nil>}
	I0717 22:50:31.123607   54248 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-571296' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-571296/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-571296' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 22:50:31.262347   54248 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:50:31.262373   54248 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16899-15759/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-15759/.minikube}
	I0717 22:50:31.262422   54248 buildroot.go:174] setting up certificates
	I0717 22:50:31.262431   54248 provision.go:83] configureAuth start
	I0717 22:50:31.262443   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetMachineName
	I0717 22:50:31.262717   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetIP
	I0717 22:50:31.265157   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.265555   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:31.265582   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.265716   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:50:31.267966   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.268299   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:31.268334   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.268482   54248 provision.go:138] copyHostCerts
	I0717 22:50:31.268529   54248 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem, removing ...
	I0717 22:50:31.268538   54248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem
	I0717 22:50:31.268602   54248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem (1078 bytes)
	I0717 22:50:31.268686   54248 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem, removing ...
	I0717 22:50:31.268698   54248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem
	I0717 22:50:31.268720   54248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem (1123 bytes)
	I0717 22:50:31.268769   54248 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem, removing ...
	I0717 22:50:31.268776   54248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem
	I0717 22:50:31.268794   54248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem (1675 bytes)
	I0717 22:50:31.268837   54248 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem org=jenkins.embed-certs-571296 san=[192.168.61.179 192.168.61.179 localhost 127.0.0.1 minikube embed-certs-571296]
	I0717 22:50:31.374737   54248 provision.go:172] copyRemoteCerts
	I0717 22:50:31.374796   54248 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 22:50:31.374818   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:50:31.377344   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.377664   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:31.377700   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.377873   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:50:31.378063   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:31.378223   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:50:31.378364   54248 sshutil.go:53] new ssh client: &{IP:192.168.61.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa Username:docker}
	I0717 22:50:31.474176   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 22:50:31.498974   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0717 22:50:31.522794   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 22:50:31.546276   54248 provision.go:86] duration metric: configureAuth took 283.830107ms
	I0717 22:50:31.546313   54248 buildroot.go:189] setting minikube options for container-runtime
	I0717 22:50:31.546521   54248 config.go:182] Loaded profile config "embed-certs-571296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:50:31.546603   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:50:31.549119   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.549485   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:31.549544   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.549716   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:50:31.549898   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:31.550056   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:31.550206   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:50:31.550376   54248 main.go:141] libmachine: Using SSH client type: native
	I0717 22:50:31.550819   54248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.179 22 <nil> <nil>}
	I0717 22:50:31.550837   54248 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 22:50:31.884933   54248 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 22:50:31.884960   54248 machine.go:91] provisioned docker machine in 915.473611ms
	I0717 22:50:31.884973   54248 start.go:300] post-start starting for "embed-certs-571296" (driver="kvm2")
	I0717 22:50:31.884985   54248 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 22:50:31.885011   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:50:31.885399   54248 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 22:50:31.885444   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:50:31.887965   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.888302   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:31.888338   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.888504   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:50:31.888710   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:31.888862   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:50:31.888988   54248 sshutil.go:53] new ssh client: &{IP:192.168.61.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa Username:docker}
	I0717 22:50:31.983951   54248 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 22:50:31.988220   54248 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 22:50:31.988248   54248 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/addons for local assets ...
	I0717 22:50:31.988334   54248 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/files for local assets ...
	I0717 22:50:31.988429   54248 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> 229902.pem in /etc/ssl/certs
	I0717 22:50:31.988543   54248 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 22:50:31.997933   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:50:32.020327   54248 start.go:303] post-start completed in 135.337882ms
	I0717 22:50:32.020353   54248 fix.go:56] fixHost completed within 21.60630369s
	I0717 22:50:32.020377   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:50:32.023026   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:32.023382   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:32.023415   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:32.023665   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:50:32.023873   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:32.024047   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:32.024193   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:50:32.024348   54248 main.go:141] libmachine: Using SSH client type: native
	I0717 22:50:32.024722   54248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.179 22 <nil> <nil>}
	I0717 22:50:32.024734   54248 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 22:50:32.158218   54248 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689634232.105028258
	
	I0717 22:50:32.158252   54248 fix.go:206] guest clock: 1689634232.105028258
	I0717 22:50:32.158262   54248 fix.go:219] Guest: 2023-07-17 22:50:32.105028258 +0000 UTC Remote: 2023-07-17 22:50:32.020356843 +0000 UTC m=+219.067919578 (delta=84.671415ms)
	I0717 22:50:32.158286   54248 fix.go:190] guest clock delta is within tolerance: 84.671415ms
	I0717 22:50:32.158292   54248 start.go:83] releasing machines lock for "embed-certs-571296", held for 21.74428315s
	I0717 22:50:32.158327   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:50:32.158592   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetIP
	I0717 22:50:32.161034   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:32.161385   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:32.161418   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:32.161609   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:50:32.162089   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:50:32.162247   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:50:32.162322   54248 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 22:50:32.162368   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:50:32.162453   54248 ssh_runner.go:195] Run: cat /version.json
	I0717 22:50:32.162474   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:50:32.165101   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:32.165235   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:32.165564   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:32.165591   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:32.165615   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:32.165637   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:32.165688   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:50:32.165806   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:50:32.165877   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:32.165995   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:32.166172   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:50:32.166181   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:50:32.166307   54248 sshutil.go:53] new ssh client: &{IP:192.168.61.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa Username:docker}
	I0717 22:50:32.166363   54248 sshutil.go:53] new ssh client: &{IP:192.168.61.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa Username:docker}
	I0717 22:50:32.285102   54248 ssh_runner.go:195] Run: systemctl --version
	I0717 22:50:32.291185   54248 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 22:50:32.437104   54248 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 22:50:32.443217   54248 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 22:50:32.443291   54248 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:50:32.461161   54248 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 22:50:32.461181   54248 start.go:466] detecting cgroup driver to use...
	I0717 22:50:32.461237   54248 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 22:50:32.483011   54248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 22:50:32.497725   54248 docker.go:196] disabling cri-docker service (if available) ...
	I0717 22:50:32.497788   54248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 22:50:32.512008   54248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 22:50:32.532595   54248 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 22:50:32.654303   54248 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 22:50:32.783140   54248 docker.go:212] disabling docker service ...
	I0717 22:50:32.783209   54248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 22:50:32.795822   54248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 22:50:32.809540   54248 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 22:50:32.923229   54248 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 22:50:33.025589   54248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 22:50:33.039420   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 22:50:33.056769   54248 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 22:50:33.056831   54248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:50:33.066205   54248 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 22:50:33.066277   54248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:50:33.075559   54248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:50:33.084911   54248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:50:33.094270   54248 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 22:50:33.103819   54248 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 22:50:33.112005   54248 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 22:50:33.112070   54248 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 22:50:33.125459   54248 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 22:50:33.134481   54248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 22:50:33.240740   54248 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 22:50:33.418504   54248 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 22:50:33.418576   54248 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 22:50:33.424143   54248 start.go:534] Will wait 60s for crictl version
	I0717 22:50:33.424202   54248 ssh_runner.go:195] Run: which crictl
	I0717 22:50:33.428330   54248 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 22:50:33.465318   54248 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 22:50:33.465403   54248 ssh_runner.go:195] Run: crio --version
	I0717 22:50:33.516467   54248 ssh_runner.go:195] Run: crio --version
	I0717 22:50:33.569398   54248 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 22:50:32.810512   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:32.811060   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:32.811095   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:32.810988   55160 retry.go:31] will retry after 309.941434ms: waiting for machine to come up
	I0717 22:50:33.122633   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:33.123092   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:33.123138   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:33.123046   55160 retry.go:31] will retry after 487.561142ms: waiting for machine to come up
	I0717 22:50:33.611932   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:33.612512   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:33.612542   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:33.612485   55160 retry.go:31] will retry after 367.897327ms: waiting for machine to come up
	I0717 22:50:33.981820   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:33.982279   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:33.982326   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:33.982214   55160 retry.go:31] will retry after 630.28168ms: waiting for machine to come up
	I0717 22:50:34.614129   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:34.614625   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:34.614665   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:34.614569   55160 retry.go:31] will retry after 677.033607ms: waiting for machine to come up
	I0717 22:50:35.292873   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:35.293409   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:35.293443   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:35.293360   55160 retry.go:31] will retry after 1.011969157s: waiting for machine to come up
	I0717 22:50:36.306452   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:36.306895   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:36.306924   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:36.306836   55160 retry.go:31] will retry after 1.035213701s: waiting for machine to come up
	I0717 22:50:37.343727   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:37.344195   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:37.344227   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:37.344143   55160 retry.go:31] will retry after 1.820372185s: waiting for machine to come up
	I0717 22:50:33.571037   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetIP
	I0717 22:50:33.574233   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:33.574758   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:33.574796   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:33.575014   54248 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0717 22:50:33.579342   54248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:50:33.591600   54248 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 22:50:33.591678   54248 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:50:33.625951   54248 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 22:50:33.626026   54248 ssh_runner.go:195] Run: which lz4
	I0717 22:50:33.630581   54248 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 22:50:33.635135   54248 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 22:50:33.635171   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0717 22:50:35.389650   54248 crio.go:444] Took 1.759110 seconds to copy over tarball
	I0717 22:50:35.389728   54248 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 22:50:39.166682   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:39.167111   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:39.167146   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:39.167068   55160 retry.go:31] will retry after 1.739687633s: waiting for machine to come up
	I0717 22:50:40.909258   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:40.909752   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:40.909784   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:40.909694   55160 retry.go:31] will retry after 2.476966629s: waiting for machine to come up
	I0717 22:50:38.336151   54248 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.946397065s)
	I0717 22:50:38.336176   54248 crio.go:451] Took 2.946502 seconds to extract the tarball
	I0717 22:50:38.336184   54248 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 22:50:38.375618   54248 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:50:38.425357   54248 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 22:50:38.425377   54248 cache_images.go:84] Images are preloaded, skipping loading
	I0717 22:50:38.425449   54248 ssh_runner.go:195] Run: crio config
	I0717 22:50:38.511015   54248 cni.go:84] Creating CNI manager for ""
	I0717 22:50:38.511040   54248 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:50:38.511050   54248 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 22:50:38.511067   54248 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.179 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-571296 NodeName:embed-certs-571296 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.179"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.179 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 22:50:38.511213   54248 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.179
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-571296"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.179
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.179"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 22:50:38.511287   54248 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-571296 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.179
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:embed-certs-571296 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 22:50:38.511340   54248 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 22:50:38.522373   54248 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 22:50:38.522432   54248 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 22:50:38.532894   54248 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0717 22:50:38.550814   54248 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 22:50:38.567038   54248 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0717 22:50:38.583844   54248 ssh_runner.go:195] Run: grep 192.168.61.179	control-plane.minikube.internal$ /etc/hosts
	I0717 22:50:38.587687   54248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.179	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:50:38.600458   54248 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296 for IP: 192.168.61.179
	I0717 22:50:38.600490   54248 certs.go:190] acquiring lock for shared ca certs: {Name:mk358cdd8ffcf2f8ada4337c2bb687d932ef5afe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:50:38.600617   54248 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key
	I0717 22:50:38.600659   54248 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key
	I0717 22:50:38.600721   54248 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296/client.key
	I0717 22:50:38.600774   54248 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296/apiserver.key.1b57fe25
	I0717 22:50:38.600820   54248 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296/proxy-client.key
	I0717 22:50:38.600929   54248 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem (1338 bytes)
	W0717 22:50:38.600955   54248 certs.go:433] ignoring /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990_empty.pem, impossibly tiny 0 bytes
	I0717 22:50:38.600966   54248 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 22:50:38.600986   54248 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem (1078 bytes)
	I0717 22:50:38.601017   54248 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem (1123 bytes)
	I0717 22:50:38.601050   54248 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem (1675 bytes)
	I0717 22:50:38.601093   54248 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:50:38.601734   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 22:50:38.627490   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 22:50:38.654423   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 22:50:38.682997   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 22:50:38.712432   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 22:50:38.742901   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 22:50:38.768966   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 22:50:38.794778   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 22:50:38.819537   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 22:50:38.846730   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem --> /usr/share/ca-certificates/22990.pem (1338 bytes)
	I0717 22:50:38.870806   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /usr/share/ca-certificates/229902.pem (1708 bytes)
	I0717 22:50:38.894883   54248 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 22:50:38.911642   54248 ssh_runner.go:195] Run: openssl version
	I0717 22:50:38.917551   54248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 22:50:38.928075   54248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:50:38.932832   54248 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:50:38.932888   54248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:50:38.938574   54248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 22:50:38.948446   54248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22990.pem && ln -fs /usr/share/ca-certificates/22990.pem /etc/ssl/certs/22990.pem"
	I0717 22:50:38.958543   54248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22990.pem
	I0717 22:50:38.963637   54248 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 21:49 /usr/share/ca-certificates/22990.pem
	I0717 22:50:38.963687   54248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22990.pem
	I0717 22:50:38.969460   54248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22990.pem /etc/ssl/certs/51391683.0"
	I0717 22:50:38.979718   54248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/229902.pem && ln -fs /usr/share/ca-certificates/229902.pem /etc/ssl/certs/229902.pem"
	I0717 22:50:38.989796   54248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/229902.pem
	I0717 22:50:38.994721   54248 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 21:49 /usr/share/ca-certificates/229902.pem
	I0717 22:50:38.994779   54248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/229902.pem
	I0717 22:50:39.000394   54248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/229902.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 22:50:39.011176   54248 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 22:50:39.016792   54248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 22:50:39.022959   54248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 22:50:39.029052   54248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 22:50:39.035096   54248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 22:50:39.040890   54248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 22:50:39.047007   54248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 22:50:39.053316   54248 kubeadm.go:404] StartCluster: {Name:embed-certs-571296 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-571296 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.179 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:50:39.053429   54248 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 22:50:39.053479   54248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:50:39.082896   54248 cri.go:89] found id: ""
	I0717 22:50:39.082981   54248 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 22:50:39.092999   54248 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 22:50:39.093021   54248 kubeadm.go:636] restartCluster start
	I0717 22:50:39.093076   54248 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 22:50:39.102254   54248 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:39.103361   54248 kubeconfig.go:92] found "embed-certs-571296" server: "https://192.168.61.179:8443"
	I0717 22:50:39.105846   54248 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 22:50:39.114751   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:39.114825   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:39.125574   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:39.626315   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:39.626406   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:39.637943   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:40.126535   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:40.126643   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:40.139075   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:40.626167   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:40.626306   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:40.638180   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:41.125818   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:41.125919   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:41.137569   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:41.625798   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:41.625900   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:41.637416   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:42.125972   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:42.126076   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:42.137316   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:42.625866   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:42.625964   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:42.637524   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:43.388908   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:43.389400   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:43.389434   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:43.389373   55160 retry.go:31] will retry after 2.639442454s: waiting for machine to come up
	I0717 22:50:46.032050   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:46.032476   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:46.032510   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:46.032419   55160 retry.go:31] will retry after 2.750548097s: waiting for machine to come up
	I0717 22:50:43.126317   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:43.126425   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:43.137978   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:43.626637   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:43.626751   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:43.638260   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:44.125834   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:44.125922   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:44.136925   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:44.626547   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:44.626647   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:44.638426   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:45.125978   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:45.126061   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:45.137496   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:45.626448   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:45.626511   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:45.638236   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:46.125776   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:46.125849   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:46.137916   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:46.626561   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:46.626674   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:46.638555   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:47.126090   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:47.126210   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:47.138092   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:47.626721   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:47.626802   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:47.637828   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:48.785507   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:48.785955   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:48.785987   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:48.785912   55160 retry.go:31] will retry after 4.05132206s: waiting for machine to come up
	I0717 22:50:48.126359   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:48.126438   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:48.137826   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:48.626413   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:48.626507   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:48.638354   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:49.114916   54248 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 22:50:49.114971   54248 kubeadm.go:1128] stopping kube-system containers ...
	I0717 22:50:49.114981   54248 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 22:50:49.115054   54248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:50:49.149465   54248 cri.go:89] found id: ""
	I0717 22:50:49.149558   54248 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 22:50:49.165197   54248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 22:50:49.174386   54248 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:50:49.174452   54248 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 22:50:49.183137   54248 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 22:50:49.183162   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:50:49.294495   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:50:50.169663   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:50:50.373276   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:50:50.485690   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:50:50.551312   54248 api_server.go:52] waiting for apiserver process to appear ...
	I0717 22:50:50.551389   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:50:51.066760   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:50:51.566423   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:50:52.066949   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:50:52.566304   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:50:54.227701   54649 start.go:369] acquired machines lock for "default-k8s-diff-port-504828" in 3m16.595911739s
	I0717 22:50:54.227764   54649 start.go:96] Skipping create...Using existing machine configuration
	I0717 22:50:54.227786   54649 fix.go:54] fixHost starting: 
	I0717 22:50:54.228206   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:50:54.228246   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:50:54.245721   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
	I0717 22:50:54.246143   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:50:54.246746   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:50:54.246783   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:50:54.247139   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:50:54.247353   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:50:54.247512   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetState
	I0717 22:50:54.249590   54649 fix.go:102] recreateIfNeeded on default-k8s-diff-port-504828: state=Stopped err=<nil>
	I0717 22:50:54.249630   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	W0717 22:50:54.249835   54649 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 22:50:54.251932   54649 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-504828" ...
	I0717 22:50:52.838478   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:52.839101   54573 main.go:141] libmachine: (no-preload-935524) Found IP for machine: 192.168.39.6
	I0717 22:50:52.839120   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has current primary IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:52.839129   54573 main.go:141] libmachine: (no-preload-935524) Reserving static IP address...
	I0717 22:50:52.839689   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "no-preload-935524", mac: "52:54:00:dc:7e:aa", ip: "192.168.39.6"} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:52.839724   54573 main.go:141] libmachine: (no-preload-935524) DBG | skip adding static IP to network mk-no-preload-935524 - found existing host DHCP lease matching {name: "no-preload-935524", mac: "52:54:00:dc:7e:aa", ip: "192.168.39.6"}
	I0717 22:50:52.839737   54573 main.go:141] libmachine: (no-preload-935524) Reserved static IP address: 192.168.39.6
	I0717 22:50:52.839752   54573 main.go:141] libmachine: (no-preload-935524) Waiting for SSH to be available...
	I0717 22:50:52.839769   54573 main.go:141] libmachine: (no-preload-935524) DBG | Getting to WaitForSSH function...
	I0717 22:50:52.842402   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:52.842739   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:52.842773   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:52.842861   54573 main.go:141] libmachine: (no-preload-935524) DBG | Using SSH client type: external
	I0717 22:50:52.842889   54573 main.go:141] libmachine: (no-preload-935524) DBG | Using SSH private key: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/no-preload-935524/id_rsa (-rw-------)
	I0717 22:50:52.842929   54573 main.go:141] libmachine: (no-preload-935524) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.6 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16899-15759/.minikube/machines/no-preload-935524/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 22:50:52.842947   54573 main.go:141] libmachine: (no-preload-935524) DBG | About to run SSH command:
	I0717 22:50:52.842962   54573 main.go:141] libmachine: (no-preload-935524) DBG | exit 0
	I0717 22:50:52.942283   54573 main.go:141] libmachine: (no-preload-935524) DBG | SSH cmd err, output: <nil>: 
	I0717 22:50:52.942665   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetConfigRaw
	I0717 22:50:52.943403   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetIP
	I0717 22:50:52.946152   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:52.946546   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:52.946587   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:52.946823   54573 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/config.json ...
	I0717 22:50:52.947043   54573 machine.go:88] provisioning docker machine ...
	I0717 22:50:52.947062   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:50:52.947259   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetMachineName
	I0717 22:50:52.947411   54573 buildroot.go:166] provisioning hostname "no-preload-935524"
	I0717 22:50:52.947431   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetMachineName
	I0717 22:50:52.947556   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:50:52.950010   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:52.950364   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:52.950394   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:52.950539   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:50:52.950709   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:52.950849   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:52.950980   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:50:52.951165   54573 main.go:141] libmachine: Using SSH client type: native
	I0717 22:50:52.951809   54573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0717 22:50:52.951831   54573 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-935524 && echo "no-preload-935524" | sudo tee /etc/hostname
	I0717 22:50:53.102629   54573 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-935524
	
	I0717 22:50:53.102665   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:50:53.105306   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.105689   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:53.105724   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.105856   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:50:53.106048   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:53.106219   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:53.106362   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:50:53.106504   54573 main.go:141] libmachine: Using SSH client type: native
	I0717 22:50:53.106886   54573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0717 22:50:53.106904   54573 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-935524' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-935524/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-935524' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 22:50:53.250601   54573 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:50:53.250631   54573 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16899-15759/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-15759/.minikube}
	I0717 22:50:53.250711   54573 buildroot.go:174] setting up certificates
	I0717 22:50:53.250721   54573 provision.go:83] configureAuth start
	I0717 22:50:53.250735   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetMachineName
	I0717 22:50:53.251063   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetIP
	I0717 22:50:53.253864   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.254309   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:53.254344   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.254513   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:50:53.256938   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.257385   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:53.257429   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.257534   54573 provision.go:138] copyHostCerts
	I0717 22:50:53.257595   54573 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem, removing ...
	I0717 22:50:53.257607   54573 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem
	I0717 22:50:53.257682   54573 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem (1078 bytes)
	I0717 22:50:53.257804   54573 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem, removing ...
	I0717 22:50:53.257816   54573 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem
	I0717 22:50:53.257843   54573 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem (1123 bytes)
	I0717 22:50:53.257929   54573 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem, removing ...
	I0717 22:50:53.257938   54573 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem
	I0717 22:50:53.257964   54573 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem (1675 bytes)
	I0717 22:50:53.258060   54573 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem org=jenkins.no-preload-935524 san=[192.168.39.6 192.168.39.6 localhost 127.0.0.1 minikube no-preload-935524]
	I0717 22:50:53.392234   54573 provision.go:172] copyRemoteCerts
	I0717 22:50:53.392307   54573 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 22:50:53.392335   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:50:53.395139   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.395529   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:53.395560   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.395734   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:50:53.395932   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:53.396109   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:50:53.396268   54573 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/no-preload-935524/id_rsa Username:docker}
	I0717 22:50:53.495214   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 22:50:53.523550   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0717 22:50:53.552276   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 22:50:53.576026   54573 provision.go:86] duration metric: configureAuth took 325.291158ms
	I0717 22:50:53.576057   54573 buildroot.go:189] setting minikube options for container-runtime
	I0717 22:50:53.576313   54573 config.go:182] Loaded profile config "no-preload-935524": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:50:53.576414   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:50:53.578969   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.579363   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:53.579404   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.579585   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:50:53.579783   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:53.579943   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:53.580113   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:50:53.580302   54573 main.go:141] libmachine: Using SSH client type: native
	I0717 22:50:53.580952   54573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0717 22:50:53.580979   54573 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 22:50:53.948696   54573 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 22:50:53.948725   54573 machine.go:91] provisioned docker machine in 1.001666705s
	I0717 22:50:53.948737   54573 start.go:300] post-start starting for "no-preload-935524" (driver="kvm2")
	I0717 22:50:53.948756   54573 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 22:50:53.948788   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:50:53.949144   54573 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 22:50:53.949179   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:50:53.951786   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.952221   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:53.952255   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.952468   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:50:53.952642   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:53.952863   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:50:53.953001   54573 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/no-preload-935524/id_rsa Username:docker}
	I0717 22:50:54.054995   54573 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 22:50:54.060431   54573 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 22:50:54.060455   54573 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/addons for local assets ...
	I0717 22:50:54.060524   54573 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/files for local assets ...
	I0717 22:50:54.060624   54573 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> 229902.pem in /etc/ssl/certs
	I0717 22:50:54.060737   54573 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 22:50:54.072249   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:50:54.094894   54573 start.go:303] post-start completed in 146.143243ms
	I0717 22:50:54.094919   54573 fix.go:56] fixHost completed within 21.936441056s
	I0717 22:50:54.094937   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:50:54.097560   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:54.097893   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:54.097926   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:54.098153   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:50:54.098377   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:54.098561   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:54.098729   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:50:54.098899   54573 main.go:141] libmachine: Using SSH client type: native
	I0717 22:50:54.099308   54573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0717 22:50:54.099323   54573 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 22:50:54.227537   54573 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689634254.168158155
	
	I0717 22:50:54.227562   54573 fix.go:206] guest clock: 1689634254.168158155
	I0717 22:50:54.227573   54573 fix.go:219] Guest: 2023-07-17 22:50:54.168158155 +0000 UTC Remote: 2023-07-17 22:50:54.094922973 +0000 UTC m=+201.463147612 (delta=73.235182ms)
	I0717 22:50:54.227598   54573 fix.go:190] guest clock delta is within tolerance: 73.235182ms
	I0717 22:50:54.227604   54573 start.go:83] releasing machines lock for "no-preload-935524", held for 22.06917115s
	I0717 22:50:54.227636   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:50:54.227891   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetIP
	I0717 22:50:54.230831   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:54.231223   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:54.231262   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:54.231367   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:50:54.231932   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:50:54.232109   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:50:54.232181   54573 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 22:50:54.232226   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:50:54.232322   54573 ssh_runner.go:195] Run: cat /version.json
	I0717 22:50:54.232354   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:50:54.235001   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:54.235351   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:54.235429   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:54.235463   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:54.235600   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:50:54.235791   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:54.235825   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:54.235857   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:54.235969   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:50:54.236027   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:50:54.236119   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:54.236253   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:50:54.236254   54573 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/no-preload-935524/id_rsa Username:docker}
	I0717 22:50:54.236392   54573 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/no-preload-935524/id_rsa Username:docker}
	I0717 22:50:54.360160   54573 ssh_runner.go:195] Run: systemctl --version
	I0717 22:50:54.367093   54573 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 22:50:54.523956   54573 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 22:50:54.531005   54573 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 22:50:54.531121   54573 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:50:54.548669   54573 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 22:50:54.548697   54573 start.go:466] detecting cgroup driver to use...
	I0717 22:50:54.548768   54573 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 22:50:54.564722   54573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 22:50:54.577237   54573 docker.go:196] disabling cri-docker service (if available) ...
	I0717 22:50:54.577303   54573 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 22:50:54.590625   54573 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 22:50:54.603897   54573 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 22:50:54.731958   54573 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 22:50:54.862565   54573 docker.go:212] disabling docker service ...
	I0717 22:50:54.862632   54573 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 22:50:54.875946   54573 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 22:50:54.888617   54573 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 22:50:54.997410   54573 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 22:50:55.110094   54573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 22:50:55.123729   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 22:50:55.144670   54573 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 22:50:55.144754   54573 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:50:55.154131   54573 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 22:50:55.154193   54573 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:50:55.164669   54573 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:50:55.177189   54573 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:50:55.189292   54573 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 22:50:55.204022   54573 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 22:50:55.212942   54573 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 22:50:55.213006   54573 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 22:50:55.232951   54573 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 22:50:55.246347   54573 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 22:50:55.366491   54573 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 22:50:55.544250   54573 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 22:50:55.544336   54573 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 22:50:55.550952   54573 start.go:534] Will wait 60s for crictl version
	I0717 22:50:55.551021   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:50:55.558527   54573 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 22:50:55.602591   54573 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 22:50:55.602687   54573 ssh_runner.go:195] Run: crio --version
	I0717 22:50:55.663719   54573 ssh_runner.go:195] Run: crio --version
	I0717 22:50:55.726644   54573 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 22:50:54.253440   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .Start
	I0717 22:50:54.253678   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Ensuring networks are active...
	I0717 22:50:54.254444   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Ensuring network default is active
	I0717 22:50:54.254861   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Ensuring network mk-default-k8s-diff-port-504828 is active
	I0717 22:50:54.255337   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Getting domain xml...
	I0717 22:50:54.256194   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Creating domain...
	I0717 22:50:54.643844   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting to get IP...
	I0717 22:50:54.644894   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:50:54.645362   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:50:54.645465   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:50:54.645359   55310 retry.go:31] will retry after 296.655364ms: waiting for machine to come up
	I0717 22:50:54.943927   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:50:54.944465   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:50:54.944500   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:50:54.944408   55310 retry.go:31] will retry after 351.801959ms: waiting for machine to come up
	I0717 22:50:55.298164   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:50:55.298678   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:50:55.298710   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:50:55.298642   55310 retry.go:31] will retry after 354.726659ms: waiting for machine to come up
	I0717 22:50:55.655122   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:50:55.655582   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:50:55.655710   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:50:55.655633   55310 retry.go:31] will retry after 540.353024ms: waiting for machine to come up
	I0717 22:50:56.197370   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:50:56.197929   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:50:56.197963   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:50:56.197897   55310 retry.go:31] will retry after 602.667606ms: waiting for machine to come up
	I0717 22:50:56.802746   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:50:56.803401   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:50:56.803431   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:50:56.803344   55310 retry.go:31] will retry after 675.557445ms: waiting for machine to come up
	I0717 22:50:57.480002   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:50:57.480476   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:50:57.480508   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:50:57.480423   55310 retry.go:31] will retry after 898.307594ms: waiting for machine to come up
	I0717 22:50:55.728247   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetIP
	I0717 22:50:55.731423   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:55.731871   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:55.731910   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:55.732109   54573 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 22:50:55.736921   54573 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:50:55.751844   54573 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 22:50:55.751895   54573 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:50:55.787286   54573 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 22:50:55.787316   54573 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.27.3 registry.k8s.io/kube-controller-manager:v1.27.3 registry.k8s.io/kube-scheduler:v1.27.3 registry.k8s.io/kube-proxy:v1.27.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.7-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 22:50:55.787387   54573 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:50:55.787398   54573 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.27.3
	I0717 22:50:55.787418   54573 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 22:50:55.787450   54573 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.27.3
	I0717 22:50:55.787589   54573 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.27.3
	I0717 22:50:55.787602   54573 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0717 22:50:55.787630   54573 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.7-0
	I0717 22:50:55.787648   54573 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0717 22:50:55.788865   54573 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.27.3
	I0717 22:50:55.788870   54573 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0717 22:50:55.788875   54573 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:50:55.788919   54573 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 22:50:55.788929   54573 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0717 22:50:55.788869   54573 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.27.3
	I0717 22:50:55.788955   54573 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.27.3
	I0717 22:50:55.789279   54573 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.7-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.7-0
	I0717 22:50:55.956462   54573 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.27.3
	I0717 22:50:55.959183   54573 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0717 22:50:55.960353   54573 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.27.3
	I0717 22:50:55.961871   54573 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.7-0
	I0717 22:50:55.963472   54573 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0717 22:50:55.970739   54573 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 22:50:55.992476   54573 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.27.3
	I0717 22:50:56.099305   54573 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.27.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.27.3" does not exist at hash "41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a" in container runtime
	I0717 22:50:56.099353   54573 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.27.3
	I0717 22:50:56.099399   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:50:56.144906   54573 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:50:56.175359   54573 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.27.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.27.3" does not exist at hash "08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a" in container runtime
	I0717 22:50:56.175407   54573 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.27.3
	I0717 22:50:56.175409   54573 cache_images.go:116] "registry.k8s.io/etcd:3.5.7-0" needs transfer: "registry.k8s.io/etcd:3.5.7-0" does not exist at hash "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681" in container runtime
	I0717 22:50:56.175444   54573 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.7-0
	I0717 22:50:56.175508   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:50:56.175549   54573 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0717 22:50:56.175452   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:50:56.175577   54573 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0717 22:50:56.175622   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:50:56.205829   54573 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.27.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.27.3" does not exist at hash "7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f" in container runtime
	I0717 22:50:56.205877   54573 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 22:50:56.205929   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:50:56.205962   54573 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.27.3
	I0717 22:50:56.205875   54573 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.27.3" needs transfer: "registry.k8s.io/kube-proxy:v1.27.3" does not exist at hash "5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c" in container runtime
	I0717 22:50:56.206017   54573 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.27.3
	I0717 22:50:56.206039   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:50:56.230299   54573 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0717 22:50:56.230358   54573 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:50:56.230406   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:50:56.230508   54573 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0717 22:50:56.230526   54573 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.7-0
	I0717 22:50:56.230585   54573 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.27.3
	I0717 22:50:56.230619   54573 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 22:50:56.280737   54573 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.27.3
	I0717 22:50:56.280740   54573 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3
	I0717 22:50:56.280876   54573 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0717 22:50:56.346096   54573 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0
	I0717 22:50:56.346185   54573 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0717 22:50:56.346213   54573 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.7-0
	I0717 22:50:56.346257   54573 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3
	I0717 22:50:56.346281   54573 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I0717 22:50:56.346325   54573 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:50:56.346360   54573 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3
	I0717 22:50:56.346370   54573 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0717 22:50:56.346409   54573 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0717 22:50:56.361471   54573 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3
	I0717 22:50:56.361511   54573 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.27.3 (exists)
	I0717 22:50:56.361546   54573 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0717 22:50:56.361605   54573 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.27.3
	I0717 22:50:56.361606   54573 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0717 22:50:56.410058   54573 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 22:50:56.410140   54573 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.27.3 (exists)
	I0717 22:50:56.410177   54573 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0717 22:50:56.410222   54573 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0717 22:50:56.410317   54573 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.7-0 (exists)
	I0717 22:50:56.410389   54573 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.27.3 (exists)
	I0717 22:50:53.066719   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:50:53.096978   54248 api_server.go:72] duration metric: took 2.545662837s to wait for apiserver process to appear ...
	I0717 22:50:53.097002   54248 api_server.go:88] waiting for apiserver healthz status ...
	I0717 22:50:53.097021   54248 api_server.go:253] Checking apiserver healthz at https://192.168.61.179:8443/healthz ...
	I0717 22:50:57.043968   54248 api_server.go:279] https://192.168.61.179:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 22:50:57.044010   54248 api_server.go:103] status: https://192.168.61.179:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 22:50:57.544722   54248 api_server.go:253] Checking apiserver healthz at https://192.168.61.179:8443/healthz ...
	I0717 22:50:57.550687   54248 api_server.go:279] https://192.168.61.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 22:50:57.550718   54248 api_server.go:103] status: https://192.168.61.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 22:50:58.045135   54248 api_server.go:253] Checking apiserver healthz at https://192.168.61.179:8443/healthz ...
	I0717 22:50:58.058934   54248 api_server.go:279] https://192.168.61.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 22:50:58.058970   54248 api_server.go:103] status: https://192.168.61.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 22:50:58.544766   54248 api_server.go:253] Checking apiserver healthz at https://192.168.61.179:8443/healthz ...
	I0717 22:50:58.550628   54248 api_server.go:279] https://192.168.61.179:8443/healthz returned 200:
	ok
	I0717 22:50:58.559879   54248 api_server.go:141] control plane version: v1.27.3
	I0717 22:50:58.559912   54248 api_server.go:131] duration metric: took 5.462902985s to wait for apiserver health ...
	I0717 22:50:58.559925   54248 cni.go:84] Creating CNI manager for ""
	I0717 22:50:58.559936   54248 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:50:58.605706   54248 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 22:50:58.380501   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:50:58.380825   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:50:58.380842   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:50:58.380780   55310 retry.go:31] will retry after 1.23430246s: waiting for machine to come up
	I0717 22:50:59.617145   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:50:59.617808   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:50:59.617841   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:50:59.617730   55310 retry.go:31] will retry after 1.214374623s: waiting for machine to come up
	I0717 22:51:00.834129   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:00.834639   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:51:00.834680   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:51:00.834594   55310 retry.go:31] will retry after 1.950432239s: waiting for machine to come up
	I0717 22:50:58.680414   54573 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.27.3: (2.318705948s)
	I0717 22:50:58.680448   54573 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3 from cache
	I0717 22:50:58.680485   54573 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.27.3: (2.318846109s)
	I0717 22:50:58.680525   54573 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.27.3 (exists)
	I0717 22:50:58.680548   54573 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.270351678s)
	I0717 22:50:58.680595   54573 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0717 22:50:58.680614   54573 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0717 22:50:58.680674   54573 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0717 22:51:01.356090   54573 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.27.3: (2.675377242s)
	I0717 22:51:01.356124   54573 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3 from cache
	I0717 22:51:01.356174   54573 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0717 22:51:01.356232   54573 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0717 22:50:58.607184   54248 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 22:50:58.656720   54248 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 22:50:58.740705   54248 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 22:50:58.760487   54248 system_pods.go:59] 8 kube-system pods found
	I0717 22:50:58.760530   54248 system_pods.go:61] "coredns-5d78c9869d-pwd8q" [f8079ab4-1d34-4847-bdb9-7d0a500ed732] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 22:50:58.760542   54248 system_pods.go:61] "etcd-embed-certs-571296" [e2a4f2bb-a767-484f-9339-7024168bb59d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 22:50:58.760553   54248 system_pods.go:61] "kube-apiserver-embed-certs-571296" [313d49ba-2814-49e7-8b97-9c278fd33686] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 22:50:58.760600   54248 system_pods.go:61] "kube-controller-manager-embed-certs-571296" [03ede9e6-f06a-45a2-bafc-0ae24db96be8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 22:50:58.760720   54248 system_pods.go:61] "kube-proxy-kpt5d" [109fb9ce-61ab-46b0-aaf8-478d61c16fe9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 22:50:58.760754   54248 system_pods.go:61] "kube-scheduler-embed-certs-571296" [a10941b1-ac81-4224-bc9e-89228ad3d5c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 22:50:58.760765   54248 system_pods.go:61] "metrics-server-74d5c6b9c-jl7jl" [251ed989-12c1-49e5-bec1-114c3548c8fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:50:58.760784   54248 system_pods.go:61] "storage-provisioner" [fb7f6371-8788-4037-8eaf-6dc2189102ec] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 22:50:58.760795   54248 system_pods.go:74] duration metric: took 20.068616ms to wait for pod list to return data ...
	I0717 22:50:58.760807   54248 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:50:58.777293   54248 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:50:58.777328   54248 node_conditions.go:123] node cpu capacity is 2
	I0717 22:50:58.777343   54248 node_conditions.go:105] duration metric: took 16.528777ms to run NodePressure ...
	I0717 22:50:58.777364   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:50:59.270627   54248 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 22:50:59.277045   54248 kubeadm.go:787] kubelet initialised
	I0717 22:50:59.277074   54248 kubeadm.go:788] duration metric: took 6.413321ms waiting for restarted kubelet to initialise ...
	I0717 22:50:59.277083   54248 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:50:59.285338   54248 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-pwd8q" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:01.304495   54248 pod_ready.go:102] pod "coredns-5d78c9869d-pwd8q" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:02.787568   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:02.788090   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:51:02.788118   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:51:02.788031   55310 retry.go:31] will retry after 2.897894179s: waiting for machine to come up
	I0717 22:51:05.687387   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:05.687774   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:51:05.687816   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:51:05.687724   55310 retry.go:31] will retry after 3.029953032s: waiting for machine to come up
	I0717 22:51:02.822684   54573 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.466424442s)
	I0717 22:51:02.822717   54573 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0717 22:51:02.822741   54573 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.7-0
	I0717 22:51:02.822790   54573 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.7-0
	I0717 22:51:03.306481   54248 pod_ready.go:102] pod "coredns-5d78c9869d-pwd8q" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:04.302530   54248 pod_ready.go:92] pod "coredns-5d78c9869d-pwd8q" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:04.302560   54248 pod_ready.go:81] duration metric: took 5.01718551s waiting for pod "coredns-5d78c9869d-pwd8q" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:04.302573   54248 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:06.320075   54248 pod_ready.go:102] pod "etcd-embed-certs-571296" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:08.719593   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:08.720084   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:51:08.720116   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:51:08.720015   55310 retry.go:31] will retry after 3.646843477s: waiting for machine to come up
	I0717 22:51:12.370696   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.371189   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Found IP for machine: 192.168.72.118
	I0717 22:51:12.371225   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has current primary IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.371237   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Reserving static IP address...
	I0717 22:51:12.371698   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-504828", mac: "52:54:00:28:6f:f7", ip: "192.168.72.118"} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:12.371729   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Reserved static IP address: 192.168.72.118
	I0717 22:51:12.371747   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | skip adding static IP to network mk-default-k8s-diff-port-504828 - found existing host DHCP lease matching {name: "default-k8s-diff-port-504828", mac: "52:54:00:28:6f:f7", ip: "192.168.72.118"}
	I0717 22:51:12.371759   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for SSH to be available...
	I0717 22:51:12.371774   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | Getting to WaitForSSH function...
	I0717 22:51:12.374416   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.374804   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:12.374839   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.374958   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | Using SSH client type: external
	I0717 22:51:12.375000   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | Using SSH private key: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa (-rw-------)
	I0717 22:51:12.375056   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.118 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 22:51:12.375078   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | About to run SSH command:
	I0717 22:51:12.375103   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | exit 0
	I0717 22:51:12.461844   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | SSH cmd err, output: <nil>: 
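
The WaitForSSH step above shells out to the system ssh client with non-interactive options and runs "exit 0" until the guest accepts the connection. A minimal sketch of that availability probe, using the key path and options shown in the log; the retry budget and sleep interval are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs a no-op "exit 0" through the system ssh client with the same
// non-interactive options the log shows and reports whether the guest
// accepted the connection.
func sshReady(user, ip, key string) bool {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", key,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, ip),
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	key := "/home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa"
	for attempt := 0; attempt < 10; attempt++ { // retry budget is an assumption
		if sshReady("docker", "192.168.72.118", key) {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
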
	I0717 22:51:12.462190   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetConfigRaw
	I0717 22:51:12.462878   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetIP
	I0717 22:51:12.465698   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.466129   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:12.466171   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.466432   54649 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/config.json ...
	I0717 22:51:12.466686   54649 machine.go:88] provisioning docker machine ...
	I0717 22:51:12.466713   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:51:12.466932   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetMachineName
	I0717 22:51:12.467149   54649 buildroot.go:166] provisioning hostname "default-k8s-diff-port-504828"
	I0717 22:51:12.467174   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetMachineName
	I0717 22:51:12.467336   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:51:12.469892   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.470309   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:12.470347   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.470539   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:51:12.470711   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:12.470906   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:12.471075   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:51:12.471251   54649 main.go:141] libmachine: Using SSH client type: native
	I0717 22:51:12.471709   54649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.118 22 <nil> <nil>}
	I0717 22:51:12.471728   54649 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-504828 && echo "default-k8s-diff-port-504828" | sudo tee /etc/hostname
	I0717 22:51:10.226119   54573 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.7-0: (7.403300342s)
	I0717 22:51:10.226147   54573 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0 from cache
	I0717 22:51:10.226176   54573 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0717 22:51:10.226231   54573 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0717 22:51:12.580664   54573 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.27.3: (2.354394197s)
	I0717 22:51:12.580698   54573 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3 from cache
	I0717 22:51:12.580729   54573 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.27.3
	I0717 22:51:12.580786   54573 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.27.3
	I0717 22:51:08.320182   54248 pod_ready.go:92] pod "etcd-embed-certs-571296" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:08.320212   54248 pod_ready.go:81] duration metric: took 4.017631268s waiting for pod "etcd-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:08.320225   54248 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:08.327865   54248 pod_ready.go:92] pod "kube-apiserver-embed-certs-571296" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:08.327901   54248 pod_ready.go:81] duration metric: took 7.613771ms waiting for pod "kube-apiserver-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:08.327916   54248 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:10.343489   54248 pod_ready.go:102] pod "kube-controller-manager-embed-certs-571296" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:11.344309   54248 pod_ready.go:92] pod "kube-controller-manager-embed-certs-571296" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:11.344328   54248 pod_ready.go:81] duration metric: took 3.016404448s waiting for pod "kube-controller-manager-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:11.344338   54248 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kpt5d" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:11.353150   54248 pod_ready.go:92] pod "kube-proxy-kpt5d" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:11.353174   54248 pod_ready.go:81] duration metric: took 8.829647ms waiting for pod "kube-proxy-kpt5d" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:11.353183   54248 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:11.360223   54248 pod_ready.go:92] pod "kube-scheduler-embed-certs-571296" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:11.360242   54248 pod_ready.go:81] duration metric: took 7.0537ms waiting for pod "kube-scheduler-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:11.360251   54248 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace to be "Ready" ...
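
The pod_ready.go lines above poll each system-critical pod until its Ready condition turns True, with a per-pod budget of 4m0s. Here is a sketch of the same check written against client-go; the kubeconfig path is a placeholder and the 2-second poll interval is an assumption.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True, the same
// predicate the pod_ready waits above keep polling.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForPod polls the pod until it is Ready or the timeout expires.
func waitForPod(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitForPod(cs, "kube-system", "metrics-server-74d5c6b9c-jl7jl", 4*time.Minute))
}
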
	I0717 22:51:13.630627   53870 start.go:369] acquired machines lock for "old-k8s-version-332820" in 58.214644858s
	I0717 22:51:13.630698   53870 start.go:96] Skipping create...Using existing machine configuration
	I0717 22:51:13.630705   53870 fix.go:54] fixHost starting: 
	I0717 22:51:13.631117   53870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:51:13.631153   53870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:51:13.651676   53870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38349
	I0717 22:51:13.652152   53870 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:51:13.652820   53870 main.go:141] libmachine: Using API Version  1
	I0717 22:51:13.652841   53870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:51:13.653180   53870 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:51:13.653679   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:51:13.653832   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetState
	I0717 22:51:13.656911   53870 fix.go:102] recreateIfNeeded on old-k8s-version-332820: state=Stopped err=<nil>
	I0717 22:51:13.656944   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	W0717 22:51:13.657151   53870 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 22:51:13.659194   53870 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-332820" ...
	I0717 22:51:12.607198   54649 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-504828
	
	I0717 22:51:12.607256   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:51:12.610564   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.611073   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:12.611139   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.611470   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:51:12.611707   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:12.611918   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:12.612080   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:51:12.612267   54649 main.go:141] libmachine: Using SSH client type: native
	I0717 22:51:12.612863   54649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.118 22 <nil> <nil>}
	I0717 22:51:12.612897   54649 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-504828' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-504828/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-504828' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 22:51:12.749133   54649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:51:12.749159   54649 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16899-15759/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-15759/.minikube}
	I0717 22:51:12.749187   54649 buildroot.go:174] setting up certificates
	I0717 22:51:12.749198   54649 provision.go:83] configureAuth start
	I0717 22:51:12.749211   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetMachineName
	I0717 22:51:12.749475   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetIP
	I0717 22:51:12.752199   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.752608   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:12.752637   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.752753   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:51:12.754758   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.755095   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:12.755142   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.755255   54649 provision.go:138] copyHostCerts
	I0717 22:51:12.755313   54649 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem, removing ...
	I0717 22:51:12.755328   54649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem
	I0717 22:51:12.755393   54649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem (1078 bytes)
	I0717 22:51:12.755503   54649 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem, removing ...
	I0717 22:51:12.755516   54649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem
	I0717 22:51:12.755547   54649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem (1123 bytes)
	I0717 22:51:12.755615   54649 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem, removing ...
	I0717 22:51:12.755626   54649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem
	I0717 22:51:12.755649   54649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem (1675 bytes)
	I0717 22:51:12.755708   54649 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-504828 san=[192.168.72.118 192.168.72.118 localhost 127.0.0.1 minikube default-k8s-diff-port-504828]
	I0717 22:51:12.865920   54649 provision.go:172] copyRemoteCerts
	I0717 22:51:12.865978   54649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 22:51:12.865998   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:51:12.868784   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.869162   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:12.869196   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.869354   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:51:12.869551   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:12.869731   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:51:12.869864   54649 sshutil.go:53] new ssh client: &{IP:192.168.72.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa Username:docker}
	I0717 22:51:12.963734   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 22:51:12.988925   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0717 22:51:13.014007   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 22:51:13.037974   54649 provision.go:86] duration metric: configureAuth took 288.764872ms
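
configureAuth above regenerates a server certificate for the machine and copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. One way to sketch that copy step is to pipe each PEM through ssh into sudo tee, since the target directory is root-owned; minikube's own runner performs the equivalent internally, so this is only an illustrative approximation using the paths from the log.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// pushCert streams a local PEM file into "sudo tee" on the guest, because a
// plain scp into the root-owned /etc/docker would be refused.
func pushCert(user, ip, key, local, remote string) error {
	data, err := os.ReadFile(local)
	if err != nil {
		return err
	}
	cmd := exec.Command("ssh", "-i", key, fmt.Sprintf("%s@%s", user, ip),
		fmt.Sprintf("sudo mkdir -p /etc/docker && sudo tee %s >/dev/null", remote))
	cmd.Stdin = bytes.NewReader(data)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("push %s: %v: %s", remote, err, out)
	}
	return nil
}

func main() {
	base := "/home/jenkins/minikube-integration/16899-15759/.minikube"
	key := base + "/machines/default-k8s-diff-port-504828/id_rsa"
	pairs := map[string]string{
		base + "/certs/ca.pem":            "/etc/docker/ca.pem",
		base + "/machines/server.pem":     "/etc/docker/server.pem",
		base + "/machines/server-key.pem": "/etc/docker/server-key.pem",
	}
	for local, remote := range pairs {
		if err := pushCert("docker", "192.168.72.118", key, local, remote); err != nil {
			fmt.Println(err)
		}
	}
}
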
	I0717 22:51:13.038002   54649 buildroot.go:189] setting minikube options for container-runtime
	I0717 22:51:13.038226   54649 config.go:182] Loaded profile config "default-k8s-diff-port-504828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:51:13.038298   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:51:13.041038   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.041510   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:13.041560   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.041722   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:51:13.041928   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:13.042115   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:13.042265   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:51:13.042462   54649 main.go:141] libmachine: Using SSH client type: native
	I0717 22:51:13.042862   54649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.118 22 <nil> <nil>}
	I0717 22:51:13.042883   54649 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 22:51:13.359789   54649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 22:51:13.359856   54649 machine.go:91] provisioned docker machine in 893.152202ms
	I0717 22:51:13.359873   54649 start.go:300] post-start starting for "default-k8s-diff-port-504828" (driver="kvm2")
	I0717 22:51:13.359885   54649 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 22:51:13.359909   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:51:13.360286   54649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 22:51:13.360322   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:51:13.363265   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.363637   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:13.363668   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.363953   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:51:13.364165   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:13.364336   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:51:13.364484   54649 sshutil.go:53] new ssh client: &{IP:192.168.72.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa Username:docker}
	I0717 22:51:13.456030   54649 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 22:51:13.460504   54649 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 22:51:13.460539   54649 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/addons for local assets ...
	I0717 22:51:13.460610   54649 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/files for local assets ...
	I0717 22:51:13.460711   54649 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> 229902.pem in /etc/ssl/certs
	I0717 22:51:13.460824   54649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 22:51:13.469442   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:51:13.497122   54649 start.go:303] post-start completed in 137.230872ms
	I0717 22:51:13.497150   54649 fix.go:56] fixHost completed within 19.269364226s
	I0717 22:51:13.497196   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:51:13.500248   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.500673   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:13.500721   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.500872   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:51:13.501093   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:13.501256   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:13.501434   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:51:13.501602   54649 main.go:141] libmachine: Using SSH client type: native
	I0717 22:51:13.502063   54649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.118 22 <nil> <nil>}
	I0717 22:51:13.502080   54649 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 22:51:13.630454   54649 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689634273.570672552
	
	I0717 22:51:13.630476   54649 fix.go:206] guest clock: 1689634273.570672552
	I0717 22:51:13.630486   54649 fix.go:219] Guest: 2023-07-17 22:51:13.570672552 +0000 UTC Remote: 2023-07-17 22:51:13.49715425 +0000 UTC m=+216.001835933 (delta=73.518302ms)
	I0717 22:51:13.630534   54649 fix.go:190] guest clock delta is within tolerance: 73.518302ms
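
The fix.go lines above read the guest clock with date +%s.%N over SSH, compare it to the host clock, and accept the machine when the delta stays within tolerance. A sketch of that comparison follows; the 2-second tolerance used here is an assumption, not the threshold minikube actually applies.

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta reads the guest clock with date +%s.%N over ssh and
// returns its offset from the host clock.
func guestClockDelta(user, ip, key string) (time.Duration, error) {
	out, err := exec.Command("ssh", "-i", key,
		fmt.Sprintf("%s@%s", user, ip), "date +%s.%N").Output()
	if err != nil {
		return 0, err
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(time.Now()), nil
}

func main() {
	const tolerance = 2 * time.Second // assumed value, not minikube's actual threshold
	key := "/home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa"
	delta, err := guestClockDelta("docker", "192.168.72.118", key)
	if err != nil {
		fmt.Println(err)
		return
	}
	if delta < -tolerance || delta > tolerance {
		fmt.Printf("guest clock delta %v exceeds tolerance; time sync needed\n", delta)
	} else {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	}
}
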
	I0717 22:51:13.630541   54649 start.go:83] releasing machines lock for "default-k8s-diff-port-504828", held for 19.402800296s
	I0717 22:51:13.630571   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:51:13.630804   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetIP
	I0717 22:51:13.633831   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.634285   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:13.634329   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.634496   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:51:13.635108   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:51:13.635324   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:51:13.635440   54649 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 22:51:13.635513   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:51:13.635563   54649 ssh_runner.go:195] Run: cat /version.json
	I0717 22:51:13.635590   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:51:13.638872   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.639085   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.639277   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:13.639313   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.639513   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:51:13.639711   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:13.639730   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:13.639769   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.639930   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:51:13.639966   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:51:13.640133   54649 sshutil.go:53] new ssh client: &{IP:192.168.72.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa Username:docker}
	I0717 22:51:13.640149   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:13.640293   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:51:13.640432   54649 sshutil.go:53] new ssh client: &{IP:192.168.72.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa Username:docker}
	I0717 22:51:13.732117   54649 ssh_runner.go:195] Run: systemctl --version
	I0717 22:51:13.762073   54649 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 22:51:13.920611   54649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 22:51:13.927492   54649 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 22:51:13.927552   54649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:51:13.943359   54649 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 22:51:13.943384   54649 start.go:466] detecting cgroup driver to use...
	I0717 22:51:13.943456   54649 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 22:51:13.959123   54649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 22:51:13.974812   54649 docker.go:196] disabling cri-docker service (if available) ...
	I0717 22:51:13.974875   54649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 22:51:13.991292   54649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 22:51:14.006999   54649 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 22:51:14.116763   54649 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 22:51:14.286675   54649 docker.go:212] disabling docker service ...
	I0717 22:51:14.286747   54649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 22:51:14.304879   54649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 22:51:14.319280   54649 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 22:51:14.436994   54649 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 22:51:14.551392   54649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 22:51:14.564944   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 22:51:14.588553   54649 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 22:51:14.588618   54649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:51:14.602482   54649 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 22:51:14.602561   54649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:51:14.613901   54649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:51:14.624520   54649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:51:14.634941   54649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 22:51:14.649124   54649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 22:51:14.659103   54649 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 22:51:14.659194   54649 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 22:51:14.673064   54649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 22:51:14.684547   54649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 22:51:14.796698   54649 ssh_runner.go:195] Run: sudo systemctl restart crio
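
The crio.go steps above rewrite /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroupfs as the cgroup manager, conmon_cgroup = "pod") and then restart the service. The sketch below replays those same commands over SSH; runOnGuest is a hypothetical helper standing in for minikube's ssh_runner, and the key path is the one shown in the log.

package main

import (
	"fmt"
	"os/exec"
)

// runOnGuest executes one shell command on the guest over ssh, standing in
// for minikube's ssh_runner.
func runOnGuest(key, target, script string) error {
	out, err := exec.Command("ssh", "-i", key, target, script).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%q: %v: %s", script, err, out)
	}
	return nil
}

func main() {
	key := "/home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa"
	target := "docker@192.168.72.118"
	steps := []string{
		// same edits as the crio.go log lines above
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo systemctl daemon-reload && sudo systemctl restart crio`,
	}
	for _, s := range steps {
		if err := runOnGuest(key, target, s); err != nil {
			fmt.Println(err)
			return
		}
	}
	fmt.Println("CRI-O reconfigured and restarted")
}
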
	I0717 22:51:15.013266   54649 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 22:51:15.013352   54649 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 22:51:15.019638   54649 start.go:534] Will wait 60s for crictl version
	I0717 22:51:15.019707   54649 ssh_runner.go:195] Run: which crictl
	I0717 22:51:15.023691   54649 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 22:51:15.079550   54649 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 22:51:15.079642   54649 ssh_runner.go:195] Run: crio --version
	I0717 22:51:15.149137   54649 ssh_runner.go:195] Run: crio --version
	I0717 22:51:15.210171   54649 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 22:51:15.211641   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetIP
	I0717 22:51:15.214746   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:15.215160   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:15.215195   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:15.215444   54649 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 22:51:15.220209   54649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:51:15.233265   54649 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 22:51:15.233336   54649 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:51:15.278849   54649 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 22:51:15.278928   54649 ssh_runner.go:195] Run: which lz4
	I0717 22:51:15.284618   54649 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 22:51:15.289979   54649 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 22:51:15.290021   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0717 22:51:17.240790   54649 crio.go:444] Took 1.956220 seconds to copy over tarball
	I0717 22:51:17.240850   54649 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
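
Because no preloaded images were found on the guest, the log above shows the preload tarball being copied over and unpacked into /var with tar -I lz4. Below is a sketch of those two steps; it stages the archive in /tmp rather than writing it to / so that the copy itself does not need root (minikube's runner handles that part differently), and it reuses the paths shown in the log.

package main

import (
	"fmt"
	"os/exec"
)

// loadPreload copies the preloaded image tarball to the guest and unpacks it
// into /var with lz4, mirroring the scp + tar steps in the log.
func loadPreload(tarball, user, ip, key string) error {
	target := fmt.Sprintf("%s@%s", user, ip)
	if out, err := exec.Command("scp", "-i", key, tarball,
		target+":/tmp/preloaded.tar.lz4").CombinedOutput(); err != nil {
		return fmt.Errorf("scp: %v: %s", err, out)
	}
	if out, err := exec.Command("ssh", "-i", key, target,
		"sudo tar -I lz4 -C /var -xf /tmp/preloaded.tar.lz4").CombinedOutput(); err != nil {
		return fmt.Errorf("extract: %v: %s", err, out)
	}
	return nil
}

func main() {
	base := "/home/jenkins/minikube-integration/16899-15759/.minikube"
	err := loadPreload(
		base+"/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4",
		"docker", "192.168.72.118",
		base+"/machines/default-k8s-diff-port-504828/id_rsa")
	fmt.Println(err)
}
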
	I0717 22:51:14.577167   54573 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.27.3: (1.996354374s)
	I0717 22:51:14.577200   54573 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3 from cache
	I0717 22:51:14.577239   54573 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 22:51:14.577288   54573 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0717 22:51:15.749388   54573 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.172071962s)
	I0717 22:51:15.749419   54573 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 22:51:15.749442   54573 cache_images.go:123] Successfully loaded all cached images
	I0717 22:51:15.749448   54573 cache_images.go:92] LoadImages completed in 19.962118423s
	I0717 22:51:15.749548   54573 ssh_runner.go:195] Run: crio config
	I0717 22:51:15.830341   54573 cni.go:84] Creating CNI manager for ""
	I0717 22:51:15.830380   54573 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:51:15.830394   54573 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 22:51:15.830416   54573 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.6 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-935524 NodeName:no-preload-935524 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 22:51:15.830609   54573 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-935524"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 22:51:15.830710   54573 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-935524 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:no-preload-935524 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 22:51:15.830777   54573 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 22:51:15.844785   54573 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 22:51:15.844854   54573 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 22:51:15.859135   54573 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0717 22:51:15.884350   54573 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 22:51:15.904410   54573 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0717 22:51:15.930959   54573 ssh_runner.go:195] Run: grep 192.168.39.6	control-plane.minikube.internal$ /etc/hosts
	I0717 22:51:15.937680   54573 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:51:15.960124   54573 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524 for IP: 192.168.39.6
	I0717 22:51:15.960169   54573 certs.go:190] acquiring lock for shared ca certs: {Name:mk358cdd8ffcf2f8ada4337c2bb687d932ef5afe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:51:15.960352   54573 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key
	I0717 22:51:15.960416   54573 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key
	I0717 22:51:15.960539   54573 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/client.key
	I0717 22:51:15.960635   54573 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/apiserver.key.cc3bd7a5
	I0717 22:51:15.960694   54573 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/proxy-client.key
	I0717 22:51:15.960842   54573 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem (1338 bytes)
	W0717 22:51:15.960882   54573 certs.go:433] ignoring /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990_empty.pem, impossibly tiny 0 bytes
	I0717 22:51:15.960899   54573 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 22:51:15.960936   54573 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem (1078 bytes)
	I0717 22:51:15.960973   54573 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem (1123 bytes)
	I0717 22:51:15.961001   54573 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem (1675 bytes)
	I0717 22:51:15.961063   54573 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:51:15.961864   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 22:51:16.000246   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 22:51:16.036739   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 22:51:16.073916   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 22:51:16.110871   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 22:51:16.147671   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 22:51:16.183503   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 22:51:16.216441   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 22:51:16.251053   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 22:51:16.291022   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem --> /usr/share/ca-certificates/22990.pem (1338 bytes)
	I0717 22:51:16.327764   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /usr/share/ca-certificates/229902.pem (1708 bytes)
	I0717 22:51:16.360870   54573 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 22:51:16.399760   54573 ssh_runner.go:195] Run: openssl version
	I0717 22:51:16.407720   54573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 22:51:16.423038   54573 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:51:16.430870   54573 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:51:16.430933   54573 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:51:16.441206   54573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 22:51:16.455708   54573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22990.pem && ln -fs /usr/share/ca-certificates/22990.pem /etc/ssl/certs/22990.pem"
	I0717 22:51:16.470036   54573 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22990.pem
	I0717 22:51:16.477133   54573 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 21:49 /usr/share/ca-certificates/22990.pem
	I0717 22:51:16.477206   54573 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22990.pem
	I0717 22:51:16.485309   54573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22990.pem /etc/ssl/certs/51391683.0"
	I0717 22:51:16.503973   54573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/229902.pem && ln -fs /usr/share/ca-certificates/229902.pem /etc/ssl/certs/229902.pem"
	I0717 22:51:16.524430   54573 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/229902.pem
	I0717 22:51:16.533991   54573 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 21:49 /usr/share/ca-certificates/229902.pem
	I0717 22:51:16.534052   54573 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/229902.pem
	I0717 22:51:16.544688   54573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/229902.pem /etc/ssl/certs/3ec20f2e.0"
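The openssl x509 -hash / ln -fs pairs above install each CA into the OpenSSL trust directory under its subject-hash name (for example b5213941.0 for minikubeCA.pem). A short sketch of that step, assuming openssl is on PATH; the certificate path is taken from the log and the function is an illustration, not minikube's certs.go:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCA computes the OpenSSL subject hash of certPath and symlinks the cert
// into /etc/ssl/certs as <hash>.0, like the "openssl x509 -hash" + "ln -fs" steps above.
func linkCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace any stale link, as `ln -fs` would
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}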
	I0717 22:51:16.563847   54573 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 22:51:16.572122   54573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 22:51:16.583217   54573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 22:51:16.594130   54573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 22:51:16.606268   54573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 22:51:16.618166   54573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 22:51:16.628424   54573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 22:51:16.636407   54573 kubeadm.go:404] StartCluster: {Name:no-preload-935524 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-935524 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:51:16.636531   54573 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 22:51:16.636616   54573 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:51:16.677023   54573 cri.go:89] found id: ""
	I0717 22:51:16.677096   54573 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 22:51:16.691214   54573 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 22:51:16.691243   54573 kubeadm.go:636] restartCluster start
	I0717 22:51:16.691309   54573 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 22:51:16.705358   54573 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:16.707061   54573 kubeconfig.go:92] found "no-preload-935524" server: "https://192.168.39.6:8443"
	I0717 22:51:16.710828   54573 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 22:51:16.722187   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:16.722262   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:16.739474   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:17.240340   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:17.240432   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:17.255528   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:13.660641   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .Start
	I0717 22:51:13.660899   53870 main.go:141] libmachine: (old-k8s-version-332820) Ensuring networks are active...
	I0717 22:51:13.661724   53870 main.go:141] libmachine: (old-k8s-version-332820) Ensuring network default is active
	I0717 22:51:13.662114   53870 main.go:141] libmachine: (old-k8s-version-332820) Ensuring network mk-old-k8s-version-332820 is active
	I0717 22:51:13.662588   53870 main.go:141] libmachine: (old-k8s-version-332820) Getting domain xml...
	I0717 22:51:13.663907   53870 main.go:141] libmachine: (old-k8s-version-332820) Creating domain...
	I0717 22:51:14.067159   53870 main.go:141] libmachine: (old-k8s-version-332820) Waiting to get IP...
	I0717 22:51:14.067897   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:14.068328   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:14.068398   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:14.068321   55454 retry.go:31] will retry after 239.1687ms: waiting for machine to come up
	I0717 22:51:14.309022   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:14.309748   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:14.309782   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:14.309696   55454 retry.go:31] will retry after 256.356399ms: waiting for machine to come up
	I0717 22:51:14.568103   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:14.568537   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:14.568572   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:14.568490   55454 retry.go:31] will retry after 386.257739ms: waiting for machine to come up
	I0717 22:51:14.955922   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:14.956518   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:14.956548   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:14.956458   55454 retry.go:31] will retry after 410.490408ms: waiting for machine to come up
	I0717 22:51:15.368904   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:15.369672   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:15.369780   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:15.369722   55454 retry.go:31] will retry after 536.865068ms: waiting for machine to come up
	I0717 22:51:15.908301   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:15.908814   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:15.908851   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:15.908774   55454 retry.go:31] will retry after 863.22272ms: waiting for machine to come up
	I0717 22:51:16.773413   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:16.773936   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:16.773971   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:16.773877   55454 retry.go:31] will retry after 858.793193ms: waiting for machine to come up
	I0717 22:51:17.634087   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:17.634588   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:17.634613   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:17.634532   55454 retry.go:31] will retry after 1.416659037s: waiting for machine to come up
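The retry.go lines above poll libvirt for the domain's DHCP lease with steadily growing, jittered delays until an IP shows up. A rough sketch of that wait loop; lookupIP is a hypothetical stand-in for the libvirt lease query, not real minikube code:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address")

// lookupIP is a placeholder for the DHCP-lease lookup seen in the log.
func lookupIP() (string, error) { return "", errNoIP }

// waitForIP retries lookupIP with a growing, jittered delay until it succeeds
// or the overall deadline passes, mirroring the "will retry after ..." lines.
func waitForIP(timeout time.Duration) (string, error) {
	base := 200 * time.Millisecond
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		delay := base + time.Duration(rand.Int63n(int64(base))) // add jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		base += base / 2 // back off
	}
	return "", errNoIP
}

func main() {
	if _, err := waitForIP(3 * time.Second); err != nil {
		fmt.Println(err)
	}
}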
	I0717 22:51:13.375358   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:15.393985   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:17.887365   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:20.250749   54649 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.009864781s)
	I0717 22:51:20.250783   54649 crio.go:451] Took 3.009971 seconds to extract the tarball
	I0717 22:51:20.250793   54649 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 22:51:20.291666   54649 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:51:20.341098   54649 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 22:51:20.341126   54649 cache_images.go:84] Images are preloaded, skipping loading
	I0717 22:51:20.341196   54649 ssh_runner.go:195] Run: crio config
	I0717 22:51:20.415138   54649 cni.go:84] Creating CNI manager for ""
	I0717 22:51:20.415161   54649 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:51:20.415171   54649 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 22:51:20.415185   54649 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.118 APIServerPort:8444 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-504828 NodeName:default-k8s-diff-port-504828 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.118"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.118 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 22:51:20.415352   54649 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.118
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-504828"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.118
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.118"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 22:51:20.415432   54649 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-504828 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.118
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-504828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0717 22:51:20.415488   54649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 22:51:20.427702   54649 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 22:51:20.427758   54649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 22:51:20.436950   54649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0717 22:51:20.454346   54649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 22:51:20.470679   54649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0717 22:51:20.491725   54649 ssh_runner.go:195] Run: grep 192.168.72.118	control-plane.minikube.internal$ /etc/hosts
	I0717 22:51:20.495952   54649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.118	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:51:20.511714   54649 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828 for IP: 192.168.72.118
	I0717 22:51:20.511768   54649 certs.go:190] acquiring lock for shared ca certs: {Name:mk358cdd8ffcf2f8ada4337c2bb687d932ef5afe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:51:20.511949   54649 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key
	I0717 22:51:20.511997   54649 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key
	I0717 22:51:20.512100   54649 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/client.key
	I0717 22:51:20.512210   54649 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/apiserver.key.f316a5ec
	I0717 22:51:20.512293   54649 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/proxy-client.key
	I0717 22:51:20.512432   54649 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem (1338 bytes)
	W0717 22:51:20.512474   54649 certs.go:433] ignoring /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990_empty.pem, impossibly tiny 0 bytes
	I0717 22:51:20.512490   54649 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 22:51:20.512526   54649 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem (1078 bytes)
	I0717 22:51:20.512563   54649 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem (1123 bytes)
	I0717 22:51:20.512597   54649 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem (1675 bytes)
	I0717 22:51:20.512654   54649 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:51:20.513217   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 22:51:20.543975   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 22:51:20.573149   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 22:51:20.603536   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 22:51:20.632387   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 22:51:20.658524   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 22:51:20.685636   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 22:51:20.715849   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 22:51:20.746544   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /usr/share/ca-certificates/229902.pem (1708 bytes)
	I0717 22:51:20.773588   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 22:51:20.798921   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem --> /usr/share/ca-certificates/22990.pem (1338 bytes)
	I0717 22:51:20.826004   54649 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 22:51:20.843941   54649 ssh_runner.go:195] Run: openssl version
	I0717 22:51:20.849904   54649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/229902.pem && ln -fs /usr/share/ca-certificates/229902.pem /etc/ssl/certs/229902.pem"
	I0717 22:51:20.860510   54649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/229902.pem
	I0717 22:51:20.865435   54649 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 21:49 /usr/share/ca-certificates/229902.pem
	I0717 22:51:20.865499   54649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/229902.pem
	I0717 22:51:20.872493   54649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/229902.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 22:51:20.883044   54649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 22:51:20.893448   54649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:51:20.898872   54649 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:51:20.898937   54649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:51:20.905231   54649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 22:51:20.915267   54649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22990.pem && ln -fs /usr/share/ca-certificates/22990.pem /etc/ssl/certs/22990.pem"
	I0717 22:51:20.925267   54649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22990.pem
	I0717 22:51:20.929988   54649 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 21:49 /usr/share/ca-certificates/22990.pem
	I0717 22:51:20.930055   54649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22990.pem
	I0717 22:51:20.935935   54649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22990.pem /etc/ssl/certs/51391683.0"
	I0717 22:51:20.945567   54649 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 22:51:20.950083   54649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 22:51:20.956164   54649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 22:51:20.962921   54649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 22:51:20.969329   54649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 22:51:20.975672   54649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 22:51:20.981532   54649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
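Each -checkend 86400 call above only asks whether the certificate will still be valid 24 hours from now. The same check in plain Go with crypto/x509 (the certificate path is one of those listed above; this is a sketch, not the minikube code path):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// which is what `openssl x509 -checkend <seconds>` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}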
	I0717 22:51:20.987431   54649 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-504828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-504828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.118 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:51:20.987551   54649 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 22:51:20.987640   54649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:51:21.020184   54649 cri.go:89] found id: ""
	I0717 22:51:21.020272   54649 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 22:51:21.030407   54649 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 22:51:21.030426   54649 kubeadm.go:636] restartCluster start
	I0717 22:51:21.030484   54649 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 22:51:21.039171   54649 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:21.040133   54649 kubeconfig.go:92] found "default-k8s-diff-port-504828" server: "https://192.168.72.118:8444"
	I0717 22:51:21.043010   54649 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 22:51:21.052032   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:21.052083   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:21.063718   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:21.564403   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:21.564474   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:21.576250   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:22.063846   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:22.063915   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:22.077908   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:17.739595   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:17.739675   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:17.754882   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:18.240006   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:18.240109   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:18.253391   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:18.739658   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:18.739750   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:18.751666   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:19.240285   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:19.240385   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:19.254816   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:19.740338   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:19.740430   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:19.757899   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:20.240481   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:20.240561   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:20.255605   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:20.739950   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:20.740064   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:20.754552   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:21.240009   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:21.240088   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:21.252127   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:21.739671   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:21.739761   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:21.751590   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:22.239795   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:22.239895   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:22.255489   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
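The block above is minikube polling for a kube-apiserver process roughly every 500 ms; no process appears, so the loop eventually gives up with "context deadline exceeded" (logged further down). A condensed sketch of such a poll-with-deadline loop, using a plain local exec call in place of minikube's ssh_runner:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer runs `pgrep` every 500ms until a kube-apiserver process
// appears or ctx expires, like the api_server.go loop in the log.
func waitForAPIServer(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // found a matching process
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // "context deadline exceeded"
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	fmt.Println(waitForAPIServer(ctx))
}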
	I0717 22:51:19.053039   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:19.053552   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:19.053577   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:19.053545   55454 retry.go:31] will retry after 1.844468395s: waiting for machine to come up
	I0717 22:51:20.899373   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:20.899955   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:20.899985   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:20.899907   55454 retry.go:31] will retry after 1.689590414s: waiting for machine to come up
	I0717 22:51:22.590651   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:22.591178   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:22.591210   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:22.591133   55454 retry.go:31] will retry after 2.006187847s: waiting for machine to come up
	I0717 22:51:20.375100   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:22.375448   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:22.564646   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:22.564758   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:22.578416   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:23.063819   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:23.063917   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:23.076239   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:23.563771   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:23.563906   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:23.577184   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:24.064855   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:24.064943   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:24.080926   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:24.563906   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:24.564002   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:24.580421   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:25.063993   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:25.064078   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:25.076570   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:25.563894   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:25.563978   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:25.575475   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:26.063959   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:26.064042   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:26.075498   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:26.564007   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:26.564068   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:26.576760   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:27.064334   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:27.064437   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:27.076567   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:22.739773   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:22.739859   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:22.752462   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:23.240402   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:23.240481   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:23.255896   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:23.740550   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:23.740740   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:23.756364   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:24.239721   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:24.239803   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:24.251755   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:24.740355   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:24.740455   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:24.751880   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:25.240545   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:25.240637   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:25.252165   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:25.739649   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:25.739729   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:25.751302   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:26.239861   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:26.239951   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:26.251854   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:26.722721   54573 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 22:51:26.722761   54573 kubeadm.go:1128] stopping kube-system containers ...
	I0717 22:51:26.722774   54573 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 22:51:26.722824   54573 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:51:26.754496   54573 cri.go:89] found id: ""
	I0717 22:51:26.754575   54573 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 22:51:26.769858   54573 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 22:51:26.778403   54573 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:51:26.778456   54573 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 22:51:26.788782   54573 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 22:51:26.788809   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:26.926114   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:24.598549   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:24.599047   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:24.599078   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:24.598993   55454 retry.go:31] will retry after 2.77055632s: waiting for machine to come up
	I0717 22:51:27.371775   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:27.372248   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:27.372282   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:27.372196   55454 retry.go:31] will retry after 3.942088727s: waiting for machine to come up
	I0717 22:51:24.876056   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:26.876873   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:27.564363   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:27.564459   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:27.578222   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:28.063778   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:28.063883   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:28.075427   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:28.564630   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:28.564717   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:28.576903   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:29.064502   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:29.064605   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:29.075995   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:29.564295   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:29.564378   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:29.576762   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:30.063786   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:30.063870   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:30.079670   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:30.564137   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:30.564246   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:30.579055   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:31.052972   54649 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 22:51:31.053010   54649 kubeadm.go:1128] stopping kube-system containers ...
	I0717 22:51:31.053022   54649 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 22:51:31.053071   54649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:51:31.087580   54649 cri.go:89] found id: ""
	I0717 22:51:31.087681   54649 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 22:51:31.103788   54649 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 22:51:31.113570   54649 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:51:31.113630   54649 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 22:51:31.122993   54649 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 22:51:31.123016   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:31.254859   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:32.122277   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:32.360183   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:32.499924   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
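Both restarts above rebuild the cluster piece by piece with individual `kubeadm init phase` commands (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full `kubeadm init`. A compact sketch of driving that same phase sequence; the binary and config paths come from the log's PATH override and --config flag, and this is an illustration, not minikube's own bootstrapper code:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.27.3/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	// Same phase order as the log: certs, kubeconfig, kubelet, control plane, etcd.
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", cfg)
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
			return
		}
	}
}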
	I0717 22:51:28.181412   54573 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.255240525s)
	I0717 22:51:28.181446   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:28.398026   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:28.491028   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:28.586346   54573 api_server.go:52] waiting for apiserver process to appear ...
	I0717 22:51:28.586450   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:29.099979   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:29.599755   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:30.100095   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:30.600338   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:31.100205   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:31.129978   54573 api_server.go:72] duration metric: took 2.543631809s to wait for apiserver process to appear ...
	I0717 22:51:31.130004   54573 api_server.go:88] waiting for apiserver healthz status ...
	I0717 22:51:31.130020   54573 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0717 22:51:31.316328   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.316892   53870 main.go:141] libmachine: (old-k8s-version-332820) Found IP for machine: 192.168.50.149
	I0717 22:51:31.316924   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has current primary IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.316936   53870 main.go:141] libmachine: (old-k8s-version-332820) Reserving static IP address...
	I0717 22:51:31.317425   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "old-k8s-version-332820", mac: "52:54:00:46:ca:1a", ip: "192.168.50.149"} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:31.317463   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | skip adding static IP to network mk-old-k8s-version-332820 - found existing host DHCP lease matching {name: "old-k8s-version-332820", mac: "52:54:00:46:ca:1a", ip: "192.168.50.149"}
	I0717 22:51:31.317486   53870 main.go:141] libmachine: (old-k8s-version-332820) Reserved static IP address: 192.168.50.149
	I0717 22:51:31.317503   53870 main.go:141] libmachine: (old-k8s-version-332820) Waiting for SSH to be available...
	I0717 22:51:31.317531   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | Getting to WaitForSSH function...
	I0717 22:51:31.320209   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.320558   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:31.320593   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.320779   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | Using SSH client type: external
	I0717 22:51:31.320810   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | Using SSH private key: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/old-k8s-version-332820/id_rsa (-rw-------)
	I0717 22:51:31.320862   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.149 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16899-15759/.minikube/machines/old-k8s-version-332820/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 22:51:31.320881   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | About to run SSH command:
	I0717 22:51:31.320895   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | exit 0
	I0717 22:51:31.426263   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | SSH cmd err, output: <nil>: 
	I0717 22:51:31.426659   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetConfigRaw
	I0717 22:51:31.427329   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetIP
	I0717 22:51:31.430330   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.430697   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:31.430739   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.431053   53870 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/config.json ...
	I0717 22:51:31.431288   53870 machine.go:88] provisioning docker machine ...
	I0717 22:51:31.431312   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:51:31.431531   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetMachineName
	I0717 22:51:31.431711   53870 buildroot.go:166] provisioning hostname "old-k8s-version-332820"
	I0717 22:51:31.431736   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetMachineName
	I0717 22:51:31.431959   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:51:31.434616   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.435073   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:31.435105   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.435246   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:51:31.435429   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:31.435578   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:31.435720   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:51:31.435889   53870 main.go:141] libmachine: Using SSH client type: native
	I0717 22:51:31.436476   53870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.149 22 <nil> <nil>}
	I0717 22:51:31.436499   53870 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-332820 && echo "old-k8s-version-332820" | sudo tee /etc/hostname
	I0717 22:51:31.589302   53870 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-332820
	
	I0717 22:51:31.589343   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:51:31.592724   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.593180   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:31.593236   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.593559   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:51:31.593754   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:31.593922   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:31.594077   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:51:31.594266   53870 main.go:141] libmachine: Using SSH client type: native
	I0717 22:51:31.594671   53870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.149 22 <nil> <nil>}
	I0717 22:51:31.594696   53870 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-332820' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-332820/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-332820' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 22:51:31.746218   53870 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:51:31.746250   53870 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16899-15759/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-15759/.minikube}
	I0717 22:51:31.746274   53870 buildroot.go:174] setting up certificates
	I0717 22:51:31.746298   53870 provision.go:83] configureAuth start
	I0717 22:51:31.746316   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetMachineName
	I0717 22:51:31.746626   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetIP
	I0717 22:51:31.750130   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.750678   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:31.750724   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.750781   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:51:31.753170   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.753495   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:31.753552   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.753654   53870 provision.go:138] copyHostCerts
	I0717 22:51:31.753715   53870 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem, removing ...
	I0717 22:51:31.753728   53870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem
	I0717 22:51:31.753804   53870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem (1078 bytes)
	I0717 22:51:31.753944   53870 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem, removing ...
	I0717 22:51:31.753957   53870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem
	I0717 22:51:31.753989   53870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem (1123 bytes)
	I0717 22:51:31.754072   53870 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem, removing ...
	I0717 22:51:31.754085   53870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem
	I0717 22:51:31.754113   53870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem (1675 bytes)
	I0717 22:51:31.754184   53870 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-332820 san=[192.168.50.149 192.168.50.149 localhost 127.0.0.1 minikube old-k8s-version-332820]
	I0717 22:51:31.847147   53870 provision.go:172] copyRemoteCerts
	I0717 22:51:31.847203   53870 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 22:51:31.847225   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:51:31.850322   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.850753   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:31.850810   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.851095   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:51:31.851414   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:31.851605   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:51:31.851784   53870 sshutil.go:53] new ssh client: &{IP:192.168.50.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/old-k8s-version-332820/id_rsa Username:docker}
	I0717 22:51:31.951319   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 22:51:31.980515   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 22:51:32.010536   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 22:51:32.037399   53870 provision.go:86] duration metric: configureAuth took 291.082125ms
	I0717 22:51:32.037434   53870 buildroot.go:189] setting minikube options for container-runtime
	I0717 22:51:32.037660   53870 config.go:182] Loaded profile config "old-k8s-version-332820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0717 22:51:32.037735   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:51:32.040863   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.041427   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:32.041534   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.041625   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:51:32.041848   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:32.042053   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:32.042225   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:51:32.042394   53870 main.go:141] libmachine: Using SSH client type: native
	I0717 22:51:32.042812   53870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.149 22 <nil> <nil>}
	I0717 22:51:32.042834   53870 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 22:51:32.425577   53870 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 22:51:32.425603   53870 machine.go:91] provisioned docker machine in 994.299178ms
	I0717 22:51:32.425615   53870 start.go:300] post-start starting for "old-k8s-version-332820" (driver="kvm2")
	I0717 22:51:32.425627   53870 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 22:51:32.425662   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:51:32.426023   53870 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 22:51:32.426060   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:51:32.429590   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.430060   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:32.430087   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.430464   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:51:32.430677   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:32.430839   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:51:32.430955   53870 sshutil.go:53] new ssh client: &{IP:192.168.50.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/old-k8s-version-332820/id_rsa Username:docker}
	I0717 22:51:32.535625   53870 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 22:51:32.541510   53870 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 22:51:32.541569   53870 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/addons for local assets ...
	I0717 22:51:32.541660   53870 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/files for local assets ...
	I0717 22:51:32.541771   53870 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> 229902.pem in /etc/ssl/certs
	I0717 22:51:32.541919   53870 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 22:51:32.554113   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:51:32.579574   53870 start.go:303] post-start completed in 153.943669ms
	I0717 22:51:32.579597   53870 fix.go:56] fixHost completed within 18.948892402s
	I0717 22:51:32.579620   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:51:32.582411   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.582774   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:32.582807   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.582939   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:51:32.583181   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:32.583404   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:32.583562   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:51:32.583804   53870 main.go:141] libmachine: Using SSH client type: native
	I0717 22:51:32.584270   53870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.149 22 <nil> <nil>}
	I0717 22:51:32.584287   53870 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 22:51:32.727134   53870 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689634292.668672695
	
	I0717 22:51:32.727160   53870 fix.go:206] guest clock: 1689634292.668672695
	I0717 22:51:32.727171   53870 fix.go:219] Guest: 2023-07-17 22:51:32.668672695 +0000 UTC Remote: 2023-07-17 22:51:32.579600815 +0000 UTC m=+359.756107714 (delta=89.07188ms)
	I0717 22:51:32.727195   53870 fix.go:190] guest clock delta is within tolerance: 89.07188ms
	I0717 22:51:32.727201   53870 start.go:83] releasing machines lock for "old-k8s-version-332820", held for 19.096529597s
	I0717 22:51:32.727223   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:51:32.727539   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetIP
	I0717 22:51:32.730521   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.730926   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:32.730958   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.731115   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:51:32.731706   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:51:32.731881   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:51:32.731968   53870 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 22:51:32.732018   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:51:32.732115   53870 ssh_runner.go:195] Run: cat /version.json
	I0717 22:51:32.732141   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:51:32.734864   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.735214   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:32.735264   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.735284   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.735387   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:51:32.735561   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:32.735821   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:32.735832   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:51:32.735852   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.735958   53870 sshutil.go:53] new ssh client: &{IP:192.168.50.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/old-k8s-version-332820/id_rsa Username:docker}
	I0717 22:51:32.736097   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:51:32.736224   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:32.736329   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:51:32.736435   53870 sshutil.go:53] new ssh client: &{IP:192.168.50.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/old-k8s-version-332820/id_rsa Username:docker}
	I0717 22:51:32.854136   53870 ssh_runner.go:195] Run: systemctl --version
	I0717 22:51:29.375082   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:31.376747   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:32.860997   53870 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 22:51:33.025325   53870 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 22:51:33.031587   53870 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 22:51:33.031662   53870 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:51:33.046431   53870 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 22:51:33.046454   53870 start.go:466] detecting cgroup driver to use...
	I0717 22:51:33.046520   53870 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 22:51:33.067265   53870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 22:51:33.079490   53870 docker.go:196] disabling cri-docker service (if available) ...
	I0717 22:51:33.079543   53870 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 22:51:33.093639   53870 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 22:51:33.106664   53870 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 22:51:33.248823   53870 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 22:51:33.414350   53870 docker.go:212] disabling docker service ...
	I0717 22:51:33.414420   53870 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 22:51:33.428674   53870 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 22:51:33.442140   53870 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 22:51:33.564890   53870 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 22:51:33.699890   53870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 22:51:33.714011   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 22:51:33.733726   53870 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0717 22:51:33.733825   53870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:51:33.746603   53870 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 22:51:33.746676   53870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:51:33.759291   53870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:51:33.772841   53870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:51:33.785507   53870 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 22:51:33.798349   53870 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 22:51:33.807468   53870 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 22:51:33.807578   53870 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 22:51:33.822587   53870 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 22:51:33.832542   53870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 22:51:33.975008   53870 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 22:51:34.192967   53870 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 22:51:34.193041   53870 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 22:51:34.200128   53870 start.go:534] Will wait 60s for crictl version
	I0717 22:51:34.200194   53870 ssh_runner.go:195] Run: which crictl
	I0717 22:51:34.204913   53870 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 22:51:34.243900   53870 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 22:51:34.244054   53870 ssh_runner.go:195] Run: crio --version
	I0717 22:51:34.300151   53870 ssh_runner.go:195] Run: crio --version
	I0717 22:51:34.365344   53870 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0717 22:51:35.258235   54573 api_server.go:279] https://192.168.39.6:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 22:51:35.258266   54573 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 22:51:35.758740   54573 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0717 22:51:35.767634   54573 api_server.go:279] https://192.168.39.6:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 22:51:35.767669   54573 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 22:51:36.259368   54573 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0717 22:51:36.269761   54573 api_server.go:279] https://192.168.39.6:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 22:51:36.269804   54573 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 22:51:36.759179   54573 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0717 22:51:36.767717   54573 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I0717 22:51:36.783171   54573 api_server.go:141] control plane version: v1.27.3
	I0717 22:51:36.783277   54573 api_server.go:131] duration metric: took 5.653264463s to wait for apiserver health ...
	I0717 22:51:36.783299   54573 cni.go:84] Creating CNI manager for ""
	I0717 22:51:36.783320   54573 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:51:36.785787   54573 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 22:51:32.594699   54649 api_server.go:52] waiting for apiserver process to appear ...
	I0717 22:51:32.594791   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:33.112226   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:33.611860   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:34.112071   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:34.611354   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:35.111291   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:35.611869   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:35.637583   54649 api_server.go:72] duration metric: took 3.042882856s to wait for apiserver process to appear ...
	I0717 22:51:35.637607   54649 api_server.go:88] waiting for apiserver healthz status ...
	I0717 22:51:35.637624   54649 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8444/healthz ...
	I0717 22:51:36.787709   54573 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 22:51:36.808980   54573 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 22:51:36.862525   54573 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 22:51:36.878653   54573 system_pods.go:59] 8 kube-system pods found
	I0717 22:51:36.878761   54573 system_pods.go:61] "coredns-5d78c9869d-2mpst" [7516b57f-a4cb-4e2f-995e-8e063bed22ae] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 22:51:36.878788   54573 system_pods.go:61] "etcd-no-preload-935524" [b663c4f9-d98e-457d-b511-435bea5e9525] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 22:51:36.878827   54573 system_pods.go:61] "kube-apiserver-no-preload-935524" [fb6f55fc-8705-46aa-a23c-e7870e52e542] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 22:51:36.878852   54573 system_pods.go:61] "kube-controller-manager-no-preload-935524" [37d43d22-e857-4a9b-b0cb-a9fc39931baa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 22:51:36.878874   54573 system_pods.go:61] "kube-proxy-qhp66" [8bc95955-b7ba-41e3-ac67-604a9695f784] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 22:51:36.878913   54573 system_pods.go:61] "kube-scheduler-no-preload-935524" [86fef4e0-b156-421a-8d53-0d34cae2cdb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 22:51:36.878940   54573 system_pods.go:61] "metrics-server-74d5c6b9c-tlbpl" [7c478efe-4435-45dd-a688-745872fc2918] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:51:36.878959   54573 system_pods.go:61] "storage-provisioner" [85812d54-7a57-430b-991e-e301f123a86a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 22:51:36.878991   54573 system_pods.go:74] duration metric: took 16.439496ms to wait for pod list to return data ...
	I0717 22:51:36.879014   54573 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:51:36.886556   54573 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:51:36.886669   54573 node_conditions.go:123] node cpu capacity is 2
	I0717 22:51:36.886694   54573 node_conditions.go:105] duration metric: took 7.665172ms to run NodePressure ...
	I0717 22:51:36.886743   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:37.408758   54573 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 22:51:37.426705   54573 kubeadm.go:787] kubelet initialised
	I0717 22:51:37.426750   54573 kubeadm.go:788] duration metric: took 17.898411ms waiting for restarted kubelet to initialise ...
	I0717 22:51:37.426760   54573 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:51:37.442893   54573 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-2mpst" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:37.449989   54573 pod_ready.go:97] node "no-preload-935524" hosting pod "coredns-5d78c9869d-2mpst" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.450020   54573 pod_ready.go:81] duration metric: took 7.096248ms waiting for pod "coredns-5d78c9869d-2mpst" in "kube-system" namespace to be "Ready" ...
	E0717 22:51:37.450032   54573 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-935524" hosting pod "coredns-5d78c9869d-2mpst" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.450043   54573 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:37.460343   54573 pod_ready.go:97] node "no-preload-935524" hosting pod "etcd-no-preload-935524" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.460423   54573 pod_ready.go:81] duration metric: took 10.370601ms waiting for pod "etcd-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	E0717 22:51:37.460468   54573 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-935524" hosting pod "etcd-no-preload-935524" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.460481   54573 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:37.475124   54573 pod_ready.go:97] node "no-preload-935524" hosting pod "kube-apiserver-no-preload-935524" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.475203   54573 pod_ready.go:81] duration metric: took 14.713192ms waiting for pod "kube-apiserver-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	E0717 22:51:37.475224   54573 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-935524" hosting pod "kube-apiserver-no-preload-935524" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.475242   54573 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:37.486443   54573 pod_ready.go:97] node "no-preload-935524" hosting pod "kube-controller-manager-no-preload-935524" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.486529   54573 pod_ready.go:81] duration metric: took 11.253247ms waiting for pod "kube-controller-manager-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	E0717 22:51:37.486551   54573 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-935524" hosting pod "kube-controller-manager-no-preload-935524" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.486570   54573 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qhp66" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:34.367014   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetIP
	I0717 22:51:34.370717   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:34.371243   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:34.371272   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:34.371626   53870 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0717 22:51:34.380223   53870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:51:34.395496   53870 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0717 22:51:34.395564   53870 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:51:34.440412   53870 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0717 22:51:34.440486   53870 ssh_runner.go:195] Run: which lz4
	I0717 22:51:34.445702   53870 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 22:51:34.451213   53870 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 22:51:34.451259   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0717 22:51:36.330808   53870 crio.go:444] Took 1.885143 seconds to copy over tarball
	I0717 22:51:36.330866   53870 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 22:51:33.377108   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:35.379770   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:37.382141   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:37.819308   54573 pod_ready.go:97] node "no-preload-935524" hosting pod "kube-proxy-qhp66" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.819393   54573 pod_ready.go:81] duration metric: took 332.789076ms waiting for pod "kube-proxy-qhp66" in "kube-system" namespace to be "Ready" ...
	E0717 22:51:37.819414   54573 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-935524" hosting pod "kube-proxy-qhp66" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.819430   54573 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:38.213914   54573 pod_ready.go:97] node "no-preload-935524" hosting pod "kube-scheduler-no-preload-935524" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:38.213947   54573 pod_ready.go:81] duration metric: took 394.500573ms waiting for pod "kube-scheduler-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	E0717 22:51:38.213957   54573 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-935524" hosting pod "kube-scheduler-no-preload-935524" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:38.213967   54573 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:38.617826   54573 pod_ready.go:97] node "no-preload-935524" hosting pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:38.617855   54573 pod_ready.go:81] duration metric: took 403.88033ms waiting for pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace to be "Ready" ...
	E0717 22:51:38.617867   54573 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-935524" hosting pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:38.617878   54573 pod_ready.go:38] duration metric: took 1.191105641s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:51:38.617907   54573 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 22:51:38.634486   54573 ops.go:34] apiserver oom_adj: -16
	I0717 22:51:38.634511   54573 kubeadm.go:640] restartCluster took 21.94326064s
	I0717 22:51:38.634520   54573 kubeadm.go:406] StartCluster complete in 21.998122781s
	I0717 22:51:38.634560   54573 settings.go:142] acquiring lock: {Name:mk9be0d05c943f5010cb1ca9690e7c6a87f18950 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:51:38.634648   54573 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:51:38.637414   54573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/kubeconfig: {Name:mk1295c692902c2f497638c9fc1fd126fdb1a2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:51:38.637733   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 22:51:38.637868   54573 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 22:51:38.637955   54573 addons.go:69] Setting storage-provisioner=true in profile "no-preload-935524"
	I0717 22:51:38.637972   54573 addons.go:231] Setting addon storage-provisioner=true in "no-preload-935524"
	W0717 22:51:38.637986   54573 addons.go:240] addon storage-provisioner should already be in state true
	I0717 22:51:38.638036   54573 host.go:66] Checking if "no-preload-935524" exists ...
	I0717 22:51:38.638418   54573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:51:38.638441   54573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:51:38.638510   54573 addons.go:69] Setting default-storageclass=true in profile "no-preload-935524"
	I0717 22:51:38.638530   54573 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-935524"
	I0717 22:51:38.638684   54573 addons.go:69] Setting metrics-server=true in profile "no-preload-935524"
	I0717 22:51:38.638700   54573 addons.go:231] Setting addon metrics-server=true in "no-preload-935524"
	W0717 22:51:38.638707   54573 addons.go:240] addon metrics-server should already be in state true
	I0717 22:51:38.638751   54573 host.go:66] Checking if "no-preload-935524" exists ...
	I0717 22:51:38.638977   54573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:51:38.639016   54573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:51:38.639083   54573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:51:38.639106   54573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:51:38.644028   54573 config.go:182] Loaded profile config "no-preload-935524": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:51:38.656131   54573 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-935524" context rescaled to 1 replicas
	I0717 22:51:38.656182   54573 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 22:51:38.658128   54573 out.go:177] * Verifying Kubernetes components...
	I0717 22:51:38.659350   54573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45639
	I0717 22:51:38.662767   54573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:51:38.660678   54573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46603
	I0717 22:51:38.663403   54573 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:51:38.664191   54573 main.go:141] libmachine: Using API Version  1
	I0717 22:51:38.664207   54573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:51:38.664296   54573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46321
	I0717 22:51:38.664660   54573 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:51:38.664872   54573 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:51:38.665287   54573 main.go:141] libmachine: Using API Version  1
	I0717 22:51:38.665301   54573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:51:38.665363   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetState
	I0717 22:51:38.666826   54573 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:51:38.667345   54573 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:51:38.667411   54573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:51:38.667432   54573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:51:38.667875   54573 main.go:141] libmachine: Using API Version  1
	I0717 22:51:38.667888   54573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:51:38.669299   54573 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:51:38.669907   54573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:51:38.669941   54573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:51:38.689870   54573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I0717 22:51:38.690029   54573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43747
	I0717 22:51:38.690596   54573 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:51:38.691039   54573 main.go:141] libmachine: Using API Version  1
	I0717 22:51:38.691052   54573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:51:38.691354   54573 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:51:38.691782   54573 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:51:38.691932   54573 main.go:141] libmachine: Using API Version  1
	I0717 22:51:38.691942   54573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:51:38.692153   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetState
	I0717 22:51:38.692209   54573 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:51:38.692391   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetState
	I0717 22:51:38.693179   54573 addons.go:231] Setting addon default-storageclass=true in "no-preload-935524"
	W0717 22:51:38.693197   54573 addons.go:240] addon default-storageclass should already be in state true
	I0717 22:51:38.693226   54573 host.go:66] Checking if "no-preload-935524" exists ...
	I0717 22:51:38.693599   54573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:51:38.693627   54573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:51:38.695740   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:51:38.698283   54573 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 22:51:38.696822   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:51:38.700282   54573 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 22:51:38.700294   54573 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 22:51:38.700313   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:51:38.702588   54573 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:51:38.704435   54573 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:51:38.704453   54573 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 22:51:38.704470   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:51:38.704034   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:51:38.704509   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:51:38.704545   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:51:38.705314   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:51:38.705704   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:51:38.705962   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:51:38.706101   54573 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/no-preload-935524/id_rsa Username:docker}
	I0717 22:51:38.707998   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:51:38.708366   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:51:38.708391   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:51:38.708663   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:51:38.708827   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:51:38.708935   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:51:38.709039   54573 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/no-preload-935524/id_rsa Username:docker}
	I0717 22:51:38.715303   54573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42441
	I0717 22:51:38.715765   54573 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:51:38.716225   54573 main.go:141] libmachine: Using API Version  1
	I0717 22:51:38.716238   54573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:51:38.716515   54573 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:51:38.716900   54573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:51:38.716915   54573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:51:38.775381   54573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42195
	I0717 22:51:38.781850   54573 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:51:38.782856   54573 main.go:141] libmachine: Using API Version  1
	I0717 22:51:38.782886   54573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:51:38.783335   54573 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:51:38.783547   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetState
	I0717 22:51:38.786539   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:51:38.786818   54573 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 22:51:38.786841   54573 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 22:51:38.786860   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:51:38.789639   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:51:38.793649   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:51:38.793678   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:51:38.793701   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:51:38.793926   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:51:38.794106   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:51:38.794262   54573 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/no-preload-935524/id_rsa Username:docker}
	I0717 22:51:38.862651   54573 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 22:51:38.862675   54573 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 22:51:38.914260   54573 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 22:51:38.914294   54573 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 22:51:38.933208   54573 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:51:38.959784   54573 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 22:51:38.959817   54573 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 22:51:38.977205   54573 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 22:51:39.028067   54573 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 22:51:39.145640   54573 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0717 22:51:39.145688   54573 node_ready.go:35] waiting up to 6m0s for node "no-preload-935524" to be "Ready" ...
	I0717 22:51:40.593928   54573 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.616678929s)
	I0717 22:51:40.593974   54573 main.go:141] libmachine: Making call to close driver server
	I0717 22:51:40.593987   54573 main.go:141] libmachine: (no-preload-935524) Calling .Close
	I0717 22:51:40.594018   54573 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.660755961s)
	I0717 22:51:40.594062   54573 main.go:141] libmachine: Making call to close driver server
	I0717 22:51:40.594078   54573 main.go:141] libmachine: (no-preload-935524) Calling .Close
	I0717 22:51:40.594360   54573 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:51:40.594377   54573 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:51:40.594388   54573 main.go:141] libmachine: Making call to close driver server
	I0717 22:51:40.594397   54573 main.go:141] libmachine: (no-preload-935524) Calling .Close
	I0717 22:51:40.596155   54573 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:51:40.596173   54573 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:51:40.596184   54573 main.go:141] libmachine: Making call to close driver server
	I0717 22:51:40.596201   54573 main.go:141] libmachine: (no-preload-935524) Calling .Close
	I0717 22:51:40.596345   54573 main.go:141] libmachine: (no-preload-935524) DBG | Closing plugin on server side
	I0717 22:51:40.596378   54573 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:51:40.596393   54573 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:51:40.596406   54573 main.go:141] libmachine: Making call to close driver server
	I0717 22:51:40.596415   54573 main.go:141] libmachine: (no-preload-935524) Calling .Close
	I0717 22:51:40.596536   54573 main.go:141] libmachine: (no-preload-935524) DBG | Closing plugin on server side
	I0717 22:51:40.596579   54573 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:51:40.596597   54573 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:51:40.596672   54573 main.go:141] libmachine: (no-preload-935524) DBG | Closing plugin on server side
	I0717 22:51:40.596706   54573 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:51:40.596716   54573 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:51:40.766149   54573 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.73803779s)
	I0717 22:51:40.766218   54573 main.go:141] libmachine: Making call to close driver server
	I0717 22:51:40.766233   54573 main.go:141] libmachine: (no-preload-935524) Calling .Close
	I0717 22:51:40.766573   54573 main.go:141] libmachine: (no-preload-935524) DBG | Closing plugin on server side
	I0717 22:51:40.766619   54573 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:51:40.766629   54573 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:51:40.766639   54573 main.go:141] libmachine: Making call to close driver server
	I0717 22:51:40.766648   54573 main.go:141] libmachine: (no-preload-935524) Calling .Close
	I0717 22:51:40.766954   54573 main.go:141] libmachine: (no-preload-935524) DBG | Closing plugin on server side
	I0717 22:51:40.766987   54573 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:51:40.766996   54573 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:51:40.767004   54573 addons.go:467] Verifying addon metrics-server=true in "no-preload-935524"
	I0717 22:51:40.921642   54573 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 22:51:40.099354   54649 api_server.go:279] https://192.168.72.118:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 22:51:40.099395   54649 api_server.go:103] status: https://192.168.72.118:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 22:51:40.600101   54649 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8444/healthz ...
	I0717 22:51:40.606334   54649 api_server.go:279] https://192.168.72.118:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 22:51:40.606375   54649 api_server.go:103] status: https://192.168.72.118:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 22:51:41.100086   54649 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8444/healthz ...
	I0717 22:51:41.110410   54649 api_server.go:279] https://192.168.72.118:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 22:51:41.110443   54649 api_server.go:103] status: https://192.168.72.118:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 22:51:41.599684   54649 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8444/healthz ...
	I0717 22:51:41.615650   54649 api_server.go:279] https://192.168.72.118:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 22:51:41.615693   54649 api_server.go:103] status: https://192.168.72.118:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 22:51:42.100229   54649 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8444/healthz ...
	I0717 22:51:42.109347   54649 api_server.go:279] https://192.168.72.118:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 22:51:42.109400   54649 api_server.go:103] status: https://192.168.72.118:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 22:51:42.600180   54649 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8444/healthz ...
	I0717 22:51:42.607799   54649 api_server.go:279] https://192.168.72.118:8444/healthz returned 200:
	ok
	I0717 22:51:42.621454   54649 api_server.go:141] control plane version: v1.27.3
	I0717 22:51:42.621480   54649 api_server.go:131] duration metric: took 6.983866635s to wait for apiserver health ...
	I0717 22:51:42.621491   54649 cni.go:84] Creating CNI manager for ""
	I0717 22:51:42.621503   54649 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:51:42.623222   54649 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 22:51:41.140227   54573 addons.go:502] enable addons completed in 2.502347716s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 22:51:41.154857   54573 node_ready.go:58] node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:40.037161   53870 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.706262393s)
	I0717 22:51:40.037203   53870 crio.go:451] Took 3.706370 seconds to extract the tarball
	I0717 22:51:40.037215   53870 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 22:51:40.089356   53870 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:51:40.143494   53870 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0717 22:51:40.143520   53870 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 22:51:40.143582   53870 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:51:40.143803   53870 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0717 22:51:40.143819   53870 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 22:51:40.143889   53870 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0717 22:51:40.143972   53870 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 22:51:40.143979   53870 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0717 22:51:40.144036   53870 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 22:51:40.144084   53870 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 22:51:40.151367   53870 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:51:40.151467   53870 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0717 22:51:40.152588   53870 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 22:51:40.152741   53870 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0717 22:51:40.152887   53870 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0717 22:51:40.152985   53870 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 22:51:40.153357   53870 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 22:51:40.153384   53870 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 22:51:40.317883   53870 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0717 22:51:40.322240   53870 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0717 22:51:40.325883   53870 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0717 22:51:40.325883   53870 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0717 22:51:40.326725   53870 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0717 22:51:40.328193   53870 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0717 22:51:40.356171   53870 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 22:51:40.485259   53870 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:51:40.493227   53870 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0717 22:51:40.493266   53870 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0717 22:51:40.493304   53870 ssh_runner.go:195] Run: which crictl
	I0717 22:51:40.514366   53870 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0717 22:51:40.514409   53870 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 22:51:40.514459   53870 ssh_runner.go:195] Run: which crictl
	I0717 22:51:40.578201   53870 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0717 22:51:40.578304   53870 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 22:51:40.578312   53870 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0717 22:51:40.578342   53870 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 22:51:40.578363   53870 ssh_runner.go:195] Run: which crictl
	I0717 22:51:40.578396   53870 ssh_runner.go:195] Run: which crictl
	I0717 22:51:40.578451   53870 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0717 22:51:40.578485   53870 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 22:51:40.578534   53870 ssh_runner.go:195] Run: which crictl
	I0717 22:51:40.578248   53870 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0717 22:51:40.578638   53870 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0717 22:51:40.578247   53870 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0717 22:51:40.578717   53870 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0717 22:51:40.578756   53870 ssh_runner.go:195] Run: which crictl
	I0717 22:51:40.578688   53870 ssh_runner.go:195] Run: which crictl
	I0717 22:51:40.717404   53870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0717 22:51:40.717482   53870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0717 22:51:40.717627   53870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0717 22:51:40.717740   53870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0717 22:51:40.717814   53870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0717 22:51:40.717918   53870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0717 22:51:40.718015   53870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 22:51:40.856246   53870 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0717 22:51:40.856291   53870 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0717 22:51:40.856403   53870 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0717 22:51:40.856438   53870 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0717 22:51:40.856526   53870 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0717 22:51:40.856575   53870 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0717 22:51:40.856604   53870 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0717 22:51:40.856653   53870 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0717 22:51:40.861702   53870 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0717 22:51:40.861718   53870 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0717 22:51:40.861766   53870 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0717 22:51:42.019439   53870 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.157649631s)
	I0717 22:51:42.019471   53870 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0717 22:51:42.019512   53870 cache_images.go:92] LoadImages completed in 1.875976905s
	W0717 22:51:42.019588   53870 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
	I0717 22:51:42.019667   53870 ssh_runner.go:195] Run: crio config
	I0717 22:51:42.084276   53870 cni.go:84] Creating CNI manager for ""
	I0717 22:51:42.084310   53870 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:51:42.084329   53870 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 22:51:42.084352   53870 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.149 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-332820 NodeName:old-k8s-version-332820 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.149"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.149 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 22:51:42.084534   53870 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.149
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-332820"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.149
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.149"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-332820
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.149:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 22:51:42.084631   53870 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-332820 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-332820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 22:51:42.084705   53870 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0717 22:51:42.095493   53870 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 22:51:42.095576   53870 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 22:51:42.106777   53870 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0717 22:51:42.126860   53870 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 22:51:42.146610   53870 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0717 22:51:42.167959   53870 ssh_runner.go:195] Run: grep 192.168.50.149	control-plane.minikube.internal$ /etc/hosts
	I0717 22:51:42.171993   53870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.149	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:51:42.188635   53870 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820 for IP: 192.168.50.149
	I0717 22:51:42.188673   53870 certs.go:190] acquiring lock for shared ca certs: {Name:mk358cdd8ffcf2f8ada4337c2bb687d932ef5afe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:51:42.188887   53870 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key
	I0717 22:51:42.188945   53870 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key
	I0717 22:51:42.189042   53870 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/client.key
	I0717 22:51:42.189125   53870 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/apiserver.key.7e281e16
	I0717 22:51:42.189177   53870 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/proxy-client.key
	I0717 22:51:42.189322   53870 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem (1338 bytes)
	W0717 22:51:42.189362   53870 certs.go:433] ignoring /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990_empty.pem, impossibly tiny 0 bytes
	I0717 22:51:42.189377   53870 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 22:51:42.189413   53870 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem (1078 bytes)
	I0717 22:51:42.189456   53870 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem (1123 bytes)
	I0717 22:51:42.189502   53870 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem (1675 bytes)
	I0717 22:51:42.189590   53870 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:51:42.190495   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 22:51:42.219201   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 22:51:42.248355   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 22:51:42.275885   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 22:51:42.303987   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 22:51:42.329331   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 22:51:42.354424   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 22:51:42.386422   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 22:51:42.418872   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /usr/share/ca-certificates/229902.pem (1708 bytes)
	I0717 22:51:42.448869   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 22:51:42.473306   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem --> /usr/share/ca-certificates/22990.pem (1338 bytes)
	I0717 22:51:42.499302   53870 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 22:51:42.519833   53870 ssh_runner.go:195] Run: openssl version
	I0717 22:51:42.525933   53870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/229902.pem && ln -fs /usr/share/ca-certificates/229902.pem /etc/ssl/certs/229902.pem"
	I0717 22:51:42.537165   53870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/229902.pem
	I0717 22:51:42.545354   53870 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 21:49 /usr/share/ca-certificates/229902.pem
	I0717 22:51:42.545419   53870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/229902.pem
	I0717 22:51:42.551786   53870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/229902.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 22:51:42.561900   53870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 22:51:42.571880   53870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:51:42.576953   53870 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:51:42.577017   53870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:51:42.583311   53870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 22:51:42.593618   53870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22990.pem && ln -fs /usr/share/ca-certificates/22990.pem /etc/ssl/certs/22990.pem"
	I0717 22:51:42.604326   53870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22990.pem
	I0717 22:51:42.610022   53870 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 21:49 /usr/share/ca-certificates/22990.pem
	I0717 22:51:42.610084   53870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22990.pem
	I0717 22:51:42.615999   53870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22990.pem /etc/ssl/certs/51391683.0"
	I0717 22:51:42.627353   53870 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 22:51:42.632186   53870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 22:51:42.638738   53870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 22:51:42.645118   53870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 22:51:42.651619   53870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 22:51:42.658542   53870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 22:51:42.665449   53870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 22:51:42.673656   53870 kubeadm.go:404] StartCluster: {Name:old-k8s-version-332820 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-332820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.149 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:51:42.673776   53870 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 22:51:42.673832   53870 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:51:42.718032   53870 cri.go:89] found id: ""
	I0717 22:51:42.718127   53870 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 22:51:42.731832   53870 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 22:51:42.731856   53870 kubeadm.go:636] restartCluster start
	I0717 22:51:42.731907   53870 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 22:51:42.741531   53870 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:42.743035   53870 kubeconfig.go:92] found "old-k8s-version-332820" server: "https://192.168.50.149:8443"
	I0717 22:51:42.746440   53870 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 22:51:42.755816   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:42.755878   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:42.768767   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:39.384892   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:41.876361   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:42.624643   54649 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 22:51:42.660905   54649 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 22:51:42.733831   54649 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 22:51:42.761055   54649 system_pods.go:59] 8 kube-system pods found
	I0717 22:51:42.761093   54649 system_pods.go:61] "coredns-5d78c9869d-wpmhl" [ebfdf1a8-16b1-4e11-8bda-0b6afa127ed2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 22:51:42.761113   54649 system_pods.go:61] "etcd-default-k8s-diff-port-504828" [47338c6f-2509-4051-acaa-7281bbafe376] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 22:51:42.761125   54649 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-504828" [16961d82-f852-4c99-81af-a5b6290222d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 22:51:42.761138   54649 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-504828" [9e226305-9f41-4e56-8f8d-a250f46ab852] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 22:51:42.761165   54649 system_pods.go:61] "kube-proxy-kbp9x" [5a581d9c-4efa-49b7-8bd9-b877d5d12871] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 22:51:42.761183   54649 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-504828" [0d63a508-5b2b-4b61-b087-afdd063afbfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 22:51:42.761197   54649 system_pods.go:61] "metrics-server-74d5c6b9c-tj4st" [2cd90033-b07a-4458-8dac-5a618d4ed7ce] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:51:42.761207   54649 system_pods.go:61] "storage-provisioner" [c306122c-f32a-4455-a825-3e272a114ddc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 22:51:42.761217   54649 system_pods.go:74] duration metric: took 27.36753ms to wait for pod list to return data ...
	I0717 22:51:42.761226   54649 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:51:42.766615   54649 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:51:42.766640   54649 node_conditions.go:123] node cpu capacity is 2
	I0717 22:51:42.766651   54649 node_conditions.go:105] duration metric: took 5.41582ms to run NodePressure ...
	I0717 22:51:42.766666   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:43.144614   54649 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 22:51:43.151192   54649 kubeadm.go:787] kubelet initialised
	I0717 22:51:43.151229   54649 kubeadm.go:788] duration metric: took 6.579448ms waiting for restarted kubelet to initialise ...
	I0717 22:51:43.151245   54649 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:51:43.157867   54649 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-wpmhl" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:45.174145   54649 pod_ready.go:102] pod "coredns-5d78c9869d-wpmhl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:47.177320   54649 pod_ready.go:102] pod "coredns-5d78c9869d-wpmhl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:43.656678   54573 node_ready.go:58] node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:46.154037   54573 node_ready.go:49] node "no-preload-935524" has status "Ready":"True"
	I0717 22:51:46.154060   54573 node_ready.go:38] duration metric: took 7.008304923s waiting for node "no-preload-935524" to be "Ready" ...
	I0717 22:51:46.154068   54573 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:51:46.161581   54573 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-2mpst" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:46.167554   54573 pod_ready.go:92] pod "coredns-5d78c9869d-2mpst" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:46.167581   54573 pod_ready.go:81] duration metric: took 5.973951ms waiting for pod "coredns-5d78c9869d-2mpst" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:46.167593   54573 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:43.269246   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:43.269363   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:43.281553   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:43.769539   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:43.769648   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:43.784373   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:44.268932   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:44.269030   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:44.280678   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:44.769180   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:44.769268   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:44.782107   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:45.269718   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:45.269795   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:45.282616   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:45.768937   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:45.769014   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:45.782121   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:46.269531   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:46.269628   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:46.281901   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:46.769344   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:46.769437   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:46.784477   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:47.268980   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:47.269070   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:47.280858   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:47.769478   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:47.769577   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:47.783095   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:44.373907   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:46.375240   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:49.671705   54649 pod_ready.go:102] pod "coredns-5d78c9869d-wpmhl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:50.172053   54649 pod_ready.go:92] pod "coredns-5d78c9869d-wpmhl" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:50.172081   54649 pod_ready.go:81] duration metric: took 7.014190645s waiting for pod "coredns-5d78c9869d-wpmhl" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:50.172094   54649 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:52.186327   54649 pod_ready.go:102] pod "etcd-default-k8s-diff-port-504828" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:48.180621   54573 pod_ready.go:92] pod "etcd-no-preload-935524" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:48.180653   54573 pod_ready.go:81] duration metric: took 2.0130508s waiting for pod "etcd-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:48.180666   54573 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:48.185965   54573 pod_ready.go:92] pod "kube-apiserver-no-preload-935524" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:48.185985   54573 pod_ready.go:81] duration metric: took 5.310471ms waiting for pod "kube-apiserver-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:48.185996   54573 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:48.191314   54573 pod_ready.go:92] pod "kube-controller-manager-no-preload-935524" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:48.191335   54573 pod_ready.go:81] duration metric: took 5.331248ms waiting for pod "kube-controller-manager-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:48.191346   54573 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qhp66" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:48.197557   54573 pod_ready.go:92] pod "kube-proxy-qhp66" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:48.197576   54573 pod_ready.go:81] duration metric: took 6.222911ms waiting for pod "kube-proxy-qhp66" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:48.197586   54573 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:50.567470   54573 pod_ready.go:92] pod "kube-scheduler-no-preload-935524" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:50.567494   54573 pod_ready.go:81] duration metric: took 2.369900836s waiting for pod "kube-scheduler-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:50.567504   54573 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:52.582697   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:48.269386   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:48.269464   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:48.281178   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:48.769171   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:48.769255   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:48.781163   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:49.269813   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:49.269890   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:49.282099   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:49.769555   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:49.769659   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:49.782298   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:50.269111   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:50.269176   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:50.280805   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:50.769333   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:50.769438   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:50.781760   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:51.269299   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:51.269368   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:51.281559   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:51.769032   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:51.769096   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:51.780505   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:52.269033   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:52.269134   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:52.281362   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:52.755841   53870 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 22:51:52.755871   53870 kubeadm.go:1128] stopping kube-system containers ...
	I0717 22:51:52.755882   53870 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 22:51:52.755945   53870 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:51:52.789292   53870 cri.go:89] found id: ""
	I0717 22:51:52.789370   53870 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 22:51:52.805317   53870 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 22:51:52.814714   53870 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:51:52.814778   53870 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 22:51:52.824024   53870 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 22:51:52.824045   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:48.376709   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:50.877922   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:54.187055   54649 pod_ready.go:92] pod "etcd-default-k8s-diff-port-504828" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:54.187076   54649 pod_ready.go:81] duration metric: took 4.01497478s waiting for pod "etcd-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:54.187084   54649 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:54.195396   54649 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-504828" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:54.195426   54649 pod_ready.go:81] duration metric: took 8.33448ms waiting for pod "kube-apiserver-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:54.195440   54649 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:54.205666   54649 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-504828" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:54.205694   54649 pod_ready.go:81] duration metric: took 10.243213ms waiting for pod "kube-controller-manager-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:54.205713   54649 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kbp9x" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:54.217007   54649 pod_ready.go:92] pod "kube-proxy-kbp9x" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:54.217030   54649 pod_ready.go:81] duration metric: took 11.309771ms waiting for pod "kube-proxy-kbp9x" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:54.217041   54649 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:54.225509   54649 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-504828" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:54.225558   54649 pod_ready.go:81] duration metric: took 8.507279ms waiting for pod "kube-scheduler-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:54.225572   54649 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:56.592993   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:54.582860   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:56.583634   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:52.949663   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:53.985430   53870 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.035733754s)
	I0717 22:51:53.985459   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:54.222833   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:54.357196   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:54.468442   53870 api_server.go:52] waiting for apiserver process to appear ...
	I0717 22:51:54.468516   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:54.999095   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:55.499700   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:55.999447   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:56.051829   53870 api_server.go:72] duration metric: took 1.583387644s to wait for apiserver process to appear ...
	I0717 22:51:56.051856   53870 api_server.go:88] waiting for apiserver healthz status ...
	I0717 22:51:56.051872   53870 api_server.go:253] Checking apiserver healthz at https://192.168.50.149:8443/healthz ...
	I0717 22:51:53.374486   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:55.375033   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:57.376561   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:59.093181   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:01.592585   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:59.084169   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:01.583540   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:01.053643   53870 api_server.go:269] stopped: https://192.168.50.149:8443/healthz: Get "https://192.168.50.149:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 22:52:01.554418   53870 api_server.go:253] Checking apiserver healthz at https://192.168.50.149:8443/healthz ...
	I0717 22:52:01.627371   53870 api_server.go:279] https://192.168.50.149:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 22:52:01.627400   53870 api_server.go:103] status: https://192.168.50.149:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 22:52:02.054761   53870 api_server.go:253] Checking apiserver healthz at https://192.168.50.149:8443/healthz ...
	I0717 22:52:02.060403   53870 api_server.go:279] https://192.168.50.149:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0717 22:52:02.060431   53870 api_server.go:103] status: https://192.168.50.149:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0717 22:52:02.554085   53870 api_server.go:253] Checking apiserver healthz at https://192.168.50.149:8443/healthz ...
	I0717 22:52:02.561664   53870 api_server.go:279] https://192.168.50.149:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0717 22:52:02.561699   53870 api_server.go:103] status: https://192.168.50.149:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0717 22:51:59.876307   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:02.374698   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:03.054028   53870 api_server.go:253] Checking apiserver healthz at https://192.168.50.149:8443/healthz ...
	I0717 22:52:03.061055   53870 api_server.go:279] https://192.168.50.149:8443/healthz returned 200:
	ok
	I0717 22:52:03.069434   53870 api_server.go:141] control plane version: v1.16.0
	I0717 22:52:03.069465   53870 api_server.go:131] duration metric: took 7.017602055s to wait for apiserver health ...
	I0717 22:52:03.069475   53870 cni.go:84] Creating CNI manager for ""
	I0717 22:52:03.069485   53870 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:52:03.071306   53870 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 22:52:04.092490   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:06.592435   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:04.082787   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:06.089097   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:03.073009   53870 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 22:52:03.085399   53870 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 22:52:03.106415   53870 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 22:52:03.117136   53870 system_pods.go:59] 7 kube-system pods found
	I0717 22:52:03.117181   53870 system_pods.go:61] "coredns-5644d7b6d9-s9vtg" [7a1ccabb-ad03-47ef-804a-eff0b00ea65c] Running
	I0717 22:52:03.117191   53870 system_pods.go:61] "etcd-old-k8s-version-332820" [a1c2ef8d-fdb3-4394-944b-042870d25c4b] Running
	I0717 22:52:03.117198   53870 system_pods.go:61] "kube-apiserver-old-k8s-version-332820" [39a09f85-abd5-442a-887d-c04a91b87258] Running
	I0717 22:52:03.117206   53870 system_pods.go:61] "kube-controller-manager-old-k8s-version-332820" [94c599c4-d22c-4b5e-bf7b-ce0b81e21283] Running
	I0717 22:52:03.117212   53870 system_pods.go:61] "kube-proxy-vkjpn" [8fe8844c-f199-4bcb-b6a0-c6023c06ef75] Running
	I0717 22:52:03.117219   53870 system_pods.go:61] "kube-scheduler-old-k8s-version-332820" [a2102927-3de6-45d8-a37e-665adde8ca47] Running
	I0717 22:52:03.117227   53870 system_pods.go:61] "storage-provisioner" [b9bcb25d-294e-49ae-8650-98b1c7e5b4f8] Running
	I0717 22:52:03.117234   53870 system_pods.go:74] duration metric: took 10.793064ms to wait for pod list to return data ...
	I0717 22:52:03.117247   53870 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:52:03.122227   53870 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:52:03.122275   53870 node_conditions.go:123] node cpu capacity is 2
	I0717 22:52:03.122294   53870 node_conditions.go:105] duration metric: took 5.039156ms to run NodePressure ...
	I0717 22:52:03.122322   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:52:03.337823   53870 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 22:52:03.342104   53870 retry.go:31] will retry after 190.852011ms: kubelet not initialised
	I0717 22:52:03.537705   53870 retry.go:31] will retry after 190.447443ms: kubelet not initialised
	I0717 22:52:03.735450   53870 retry.go:31] will retry after 294.278727ms: kubelet not initialised
	I0717 22:52:04.034965   53870 retry.go:31] will retry after 808.339075ms: kubelet not initialised
	I0717 22:52:04.847799   53870 retry.go:31] will retry after 1.685522396s: kubelet not initialised
	I0717 22:52:06.537765   53870 retry.go:31] will retry after 1.595238483s: kubelet not initialised
	I0717 22:52:04.377461   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:06.876135   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:09.090739   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:11.093234   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:08.583118   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:11.083446   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:08.139297   53870 retry.go:31] will retry after 4.170190829s: kubelet not initialised
	I0717 22:52:12.317346   53870 retry.go:31] will retry after 5.652204651s: kubelet not initialised
	I0717 22:52:09.374610   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:11.375332   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:13.590999   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:15.591041   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:13.583868   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:16.081948   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:13.376027   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:15.874857   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:17.876130   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:17.593544   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:20.092121   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:18.082068   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:20.083496   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:22.582358   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:17.975640   53870 retry.go:31] will retry after 6.695949238s: kubelet not initialised
	I0717 22:52:20.375494   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:22.882209   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:22.591705   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:25.090965   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:25.082268   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:27.582422   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:24.676746   53870 retry.go:31] will retry after 10.942784794s: kubelet not initialised
	I0717 22:52:25.374526   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:27.375728   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:27.591516   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:30.091872   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:30.081334   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:32.082535   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:29.874508   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:31.876648   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:32.592067   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:35.092067   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:34.082954   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:36.585649   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:35.625671   53870 retry.go:31] will retry after 20.23050626s: kubelet not initialised
	I0717 22:52:34.376118   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:36.875654   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:37.592201   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:40.091539   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:39.081430   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:41.082360   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:39.374867   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:41.375759   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:42.590417   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:44.591742   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:46.593256   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:43.083211   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:45.084404   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:47.085099   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:43.376030   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:45.873482   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:47.875479   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:49.092376   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:51.592430   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:49.582087   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:52.083003   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:49.878981   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:52.374685   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:54.090617   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:56.091597   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:54.583455   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:57.081342   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:55.864261   53870 kubeadm.go:787] kubelet initialised
	I0717 22:52:55.864281   53870 kubeadm.go:788] duration metric: took 52.526433839s waiting for restarted kubelet to initialise ...
	I0717 22:52:55.864287   53870 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:52:55.870685   53870 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-s9vtg" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:55.877709   53870 pod_ready.go:92] pod "coredns-5644d7b6d9-s9vtg" in "kube-system" namespace has status "Ready":"True"
	I0717 22:52:55.877737   53870 pod_ready.go:81] duration metric: took 7.026411ms waiting for pod "coredns-5644d7b6d9-s9vtg" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:55.877750   53870 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-vnldz" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:55.883932   53870 pod_ready.go:92] pod "coredns-5644d7b6d9-vnldz" in "kube-system" namespace has status "Ready":"True"
	I0717 22:52:55.883961   53870 pod_ready.go:81] duration metric: took 6.200731ms waiting for pod "coredns-5644d7b6d9-vnldz" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:55.883974   53870 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-332820" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:55.889729   53870 pod_ready.go:92] pod "etcd-old-k8s-version-332820" in "kube-system" namespace has status "Ready":"True"
	I0717 22:52:55.889749   53870 pod_ready.go:81] duration metric: took 5.767797ms waiting for pod "etcd-old-k8s-version-332820" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:55.889757   53870 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-332820" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:55.895286   53870 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-332820" in "kube-system" namespace has status "Ready":"True"
	I0717 22:52:55.895308   53870 pod_ready.go:81] duration metric: took 5.545198ms waiting for pod "kube-apiserver-old-k8s-version-332820" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:55.895316   53870 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-332820" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:56.263125   53870 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-332820" in "kube-system" namespace has status "Ready":"True"
	I0717 22:52:56.263153   53870 pod_ready.go:81] duration metric: took 367.829768ms waiting for pod "kube-controller-manager-old-k8s-version-332820" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:56.263166   53870 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vkjpn" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:56.663235   53870 pod_ready.go:92] pod "kube-proxy-vkjpn" in "kube-system" namespace has status "Ready":"True"
	I0717 22:52:56.663262   53870 pod_ready.go:81] duration metric: took 400.086969ms waiting for pod "kube-proxy-vkjpn" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:56.663276   53870 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-332820" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:57.061892   53870 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-332820" in "kube-system" namespace has status "Ready":"True"
	I0717 22:52:57.061917   53870 pod_ready.go:81] duration metric: took 398.633591ms waiting for pod "kube-scheduler-old-k8s-version-332820" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:57.061930   53870 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:54.374907   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:56.875242   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:58.092082   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:00.590626   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:59.081826   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:01.086158   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:59.469353   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:01.968383   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:59.374420   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:01.374640   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:02.595710   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:05.094211   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:03.582006   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:05.582348   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:07.582585   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:03.969801   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:06.469220   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:03.374665   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:05.375182   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:07.874673   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:07.593189   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:10.091253   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:10.083277   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:12.581195   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:08.973101   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:11.471187   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:10.375255   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:12.875038   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:12.593192   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:15.090204   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:17.091416   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:14.581962   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:17.082092   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:13.970246   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:16.469918   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:15.374678   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:17.375402   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:19.592518   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:22.090462   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:19.582582   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:21.582788   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:18.969975   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:21.471221   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:19.876416   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:22.377064   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:24.592012   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:26.593013   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:24.082409   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:26.581889   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:23.967680   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:25.969061   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:24.876092   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:26.876727   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:29.090937   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:31.092276   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:28.583371   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:30.588656   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:28.470667   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:30.969719   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:29.374066   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:31.375107   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:33.590361   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:35.591199   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:33.082794   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:35.583369   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:33.468669   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:35.468917   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:37.469656   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:33.873830   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:35.875551   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:38.091032   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:40.095610   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:38.083632   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:40.584069   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:39.970389   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:41.972121   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:38.374344   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:40.375117   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:42.873817   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:42.591348   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:44.591801   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:47.091463   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:43.092800   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:45.583147   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:44.468092   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:46.968583   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:44.875165   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:46.875468   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:49.592016   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:52.092191   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:48.082358   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:50.581430   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:52.581722   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:48.970562   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:51.469666   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:49.374655   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:51.374912   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:54.590857   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:57.090986   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:54.581979   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:57.081602   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:53.969845   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:56.470092   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:53.874630   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:56.374076   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:59.093019   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:01.590296   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:59.581481   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:02.081651   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:58.969243   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:00.969793   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:58.874500   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:00.875485   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:03.591663   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:06.091377   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:04.082661   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:06.581409   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:02.969900   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:05.469513   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:07.469630   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:03.374576   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:05.874492   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:07.876025   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:08.092299   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:10.591576   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:08.582962   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:11.081623   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:09.469674   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:11.970568   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:09.878298   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:12.375542   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:13.089815   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:15.091295   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:13.082485   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:15.582545   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:14.469264   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:16.970184   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:14.876188   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:17.375197   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:17.590457   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:19.590668   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:21.592281   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:18.082882   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:20.581232   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:22.581451   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:19.470007   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:21.972545   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:19.874905   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:21.876111   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:24.090912   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:26.091423   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:24.582104   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:27.082466   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:24.468612   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:26.468733   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:24.375195   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:26.375302   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:28.092426   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:30.590750   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:29.083200   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:31.581109   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:28.469411   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:30.474485   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:28.376063   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:30.874877   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:32.875720   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:32.591688   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:34.592382   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:37.091435   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:33.582072   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:35.582710   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:32.968863   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:34.969408   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:37.469461   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:35.375657   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:37.873420   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:39.091786   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:41.591723   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:38.082103   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:40.582480   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:39.470591   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:41.969425   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:39.876026   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:41.876450   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:44.090732   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:46.091209   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:43.082746   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:45.580745   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:47.581165   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:44.469624   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:46.469853   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:44.375526   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:46.874381   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:48.091542   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:50.591973   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:49.583795   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:52.084521   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:48.969202   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:50.969996   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:48.874772   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:50.876953   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:53.092284   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:55.591945   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:54.582260   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:56.582456   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:53.468921   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:55.469467   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:57.469588   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:53.375369   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:55.375834   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:57.875412   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:58.092340   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:00.593507   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:58.582790   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:01.082714   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:59.968899   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:01.970513   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:59.876100   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:02.377093   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:02.594240   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:05.091858   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:03.584934   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:06.082560   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:04.469605   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:06.470074   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:04.874495   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:06.874619   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:07.591151   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:09.594253   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:12.092136   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:08.082731   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:10.594934   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:08.970358   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:10.972021   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:08.875055   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:10.875177   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:11.360474   54248 pod_ready.go:81] duration metric: took 4m0.00020957s waiting for pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace to be "Ready" ...
	E0717 22:55:11.360506   54248 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 22:55:11.360523   54248 pod_ready.go:38] duration metric: took 4m12.083431067s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:55:11.360549   54248 kubeadm.go:640] restartCluster took 4m32.267522493s
	W0717 22:55:11.360621   54248 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0717 22:55:11.360653   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 22:55:14.094015   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:16.590201   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:13.082448   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:15.581674   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:17.582135   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:13.471096   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:15.970057   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:18.591981   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:21.091787   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:19.584462   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:22.082310   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:18.469828   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:20.970377   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:23.092278   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:25.594454   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:24.583377   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:27.082479   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:23.470427   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:25.473350   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:28.091878   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:30.092032   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:29.582576   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:31.584147   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:27.969045   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:30.468478   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:32.469942   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:32.591274   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:34.591477   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:37.089772   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:34.082460   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:36.082687   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:34.470431   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:36.470791   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:39.091253   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:41.091286   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:38.082836   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:40.581494   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:42.583634   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:38.969011   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:40.969922   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:43.092434   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:45.591302   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:45.083869   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:47.582454   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:43.468968   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:45.469340   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:47.471805   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:43.113858   54248 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.753186356s)
	I0717 22:55:43.113920   54248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:55:43.128803   54248 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 22:55:43.138891   54248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 22:55:43.148155   54248 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:55:43.148209   54248 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 22:55:43.357368   54248 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 22:55:47.591967   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:50.092046   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:52.092670   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:50.081152   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:50.568456   54573 pod_ready.go:81] duration metric: took 4m0.000934324s waiting for pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace to be "Ready" ...
	E0717 22:55:50.568492   54573 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 22:55:50.568506   54573 pod_ready.go:38] duration metric: took 4m4.414427298s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:55:50.568531   54573 api_server.go:52] waiting for apiserver process to appear ...
	I0717 22:55:50.568581   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 22:55:50.568650   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 22:55:50.622016   54573 cri.go:89] found id: "c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f"
	I0717 22:55:50.622048   54573 cri.go:89] found id: ""
	I0717 22:55:50.622058   54573 logs.go:284] 1 containers: [c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f]
	I0717 22:55:50.622114   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:50.627001   54573 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 22:55:50.627065   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 22:55:50.665053   54573 cri.go:89] found id: "98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea"
	I0717 22:55:50.665073   54573 cri.go:89] found id: ""
	I0717 22:55:50.665082   54573 logs.go:284] 1 containers: [98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea]
	I0717 22:55:50.665143   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:50.670198   54573 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 22:55:50.670261   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 22:55:50.705569   54573 cri.go:89] found id: "acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266"
	I0717 22:55:50.705595   54573 cri.go:89] found id: ""
	I0717 22:55:50.705604   54573 logs.go:284] 1 containers: [acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266]
	I0717 22:55:50.705669   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:50.710494   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 22:55:50.710569   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 22:55:50.772743   54573 cri.go:89] found id: "692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629"
	I0717 22:55:50.772768   54573 cri.go:89] found id: ""
	I0717 22:55:50.772776   54573 logs.go:284] 1 containers: [692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629]
	I0717 22:55:50.772831   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:50.777741   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 22:55:50.777813   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 22:55:50.809864   54573 cri.go:89] found id: "9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567"
	I0717 22:55:50.809892   54573 cri.go:89] found id: ""
	I0717 22:55:50.809903   54573 logs.go:284] 1 containers: [9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567]
	I0717 22:55:50.809963   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:50.814586   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 22:55:50.814654   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 22:55:50.850021   54573 cri.go:89] found id: "f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c"
	I0717 22:55:50.850047   54573 cri.go:89] found id: ""
	I0717 22:55:50.850056   54573 logs.go:284] 1 containers: [f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c]
	I0717 22:55:50.850125   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:50.854615   54573 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 22:55:50.854685   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 22:55:50.893272   54573 cri.go:89] found id: ""
	I0717 22:55:50.893300   54573 logs.go:284] 0 containers: []
	W0717 22:55:50.893310   54573 logs.go:286] No container was found matching "kindnet"
	I0717 22:55:50.893318   54573 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 22:55:50.893377   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 22:55:50.926652   54573 cri.go:89] found id: "a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b"
	I0717 22:55:50.926676   54573 cri.go:89] found id: "4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6"
	I0717 22:55:50.926682   54573 cri.go:89] found id: ""
	I0717 22:55:50.926690   54573 logs.go:284] 2 containers: [a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b 4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6]
	I0717 22:55:50.926747   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:50.931220   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:50.935745   54573 logs.go:123] Gathering logs for kubelet ...
	I0717 22:55:50.935772   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 22:55:51.002727   54573 logs.go:123] Gathering logs for kube-scheduler [692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629] ...
	I0717 22:55:51.002760   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629"
	I0717 22:55:51.046774   54573 logs.go:123] Gathering logs for kube-proxy [9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567] ...
	I0717 22:55:51.046811   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567"
	I0717 22:55:51.081441   54573 logs.go:123] Gathering logs for storage-provisioner [a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b] ...
	I0717 22:55:51.081472   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b"
	I0717 22:55:51.119354   54573 logs.go:123] Gathering logs for CRI-O ...
	I0717 22:55:51.119394   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 22:55:51.710591   54573 logs.go:123] Gathering logs for kube-controller-manager [f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c] ...
	I0717 22:55:51.710634   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c"
	I0717 22:55:51.758647   54573 logs.go:123] Gathering logs for storage-provisioner [4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6] ...
	I0717 22:55:51.758679   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6"
	I0717 22:55:51.792417   54573 logs.go:123] Gathering logs for container status ...
	I0717 22:55:51.792458   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 22:55:51.836268   54573 logs.go:123] Gathering logs for dmesg ...
	I0717 22:55:51.836302   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 22:55:51.852009   54573 logs.go:123] Gathering logs for describe nodes ...
	I0717 22:55:51.852038   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 22:55:52.018156   54573 logs.go:123] Gathering logs for kube-apiserver [c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f] ...
	I0717 22:55:52.018191   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f"
	I0717 22:55:52.061680   54573 logs.go:123] Gathering logs for etcd [98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea] ...
	I0717 22:55:52.061723   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea"
	I0717 22:55:52.105407   54573 logs.go:123] Gathering logs for coredns [acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266] ...
	I0717 22:55:52.105437   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266"
	I0717 22:55:49.969074   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:51.969157   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:54.934299   54248 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0717 22:55:54.934395   54248 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 22:55:54.934498   54248 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 22:55:54.934616   54248 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 22:55:54.934741   54248 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 22:55:54.934823   54248 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 22:55:54.936386   54248 out.go:204]   - Generating certificates and keys ...
	I0717 22:55:54.936475   54248 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 22:55:54.936548   54248 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 22:55:54.936643   54248 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 22:55:54.936719   54248 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0717 22:55:54.936803   54248 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 22:55:54.936871   54248 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0717 22:55:54.936947   54248 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0717 22:55:54.937023   54248 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0717 22:55:54.937125   54248 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 22:55:54.937219   54248 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 22:55:54.937269   54248 kubeadm.go:322] [certs] Using the existing "sa" key
	I0717 22:55:54.937333   54248 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 22:55:54.937395   54248 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 22:55:54.937460   54248 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 22:55:54.937551   54248 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 22:55:54.937620   54248 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 22:55:54.937744   54248 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 22:55:54.937846   54248 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 22:55:54.937894   54248 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 22:55:54.937990   54248 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 22:55:54.939409   54248 out.go:204]   - Booting up control plane ...
	I0717 22:55:54.939534   54248 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 22:55:54.939640   54248 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 22:55:54.939733   54248 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 22:55:54.939867   54248 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 22:55:54.940059   54248 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 22:55:54.940157   54248 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504894 seconds
	I0717 22:55:54.940283   54248 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 22:55:54.940445   54248 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 22:55:54.940525   54248 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 22:55:54.940756   54248 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-571296 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 22:55:54.940829   54248 kubeadm.go:322] [bootstrap-token] Using token: zn3d72.w9x4plx1baw35867
	I0717 22:55:54.942338   54248 out.go:204]   - Configuring RBAC rules ...
	I0717 22:55:54.942484   54248 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 22:55:54.942583   54248 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 22:55:54.942759   54248 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 22:55:54.942920   54248 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 22:55:54.943088   54248 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 22:55:54.943207   54248 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 22:55:54.943365   54248 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 22:55:54.943433   54248 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 22:55:54.943527   54248 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 22:55:54.943541   54248 kubeadm.go:322] 
	I0717 22:55:54.943646   54248 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 22:55:54.943673   54248 kubeadm.go:322] 
	I0717 22:55:54.943765   54248 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 22:55:54.943774   54248 kubeadm.go:322] 
	I0717 22:55:54.943814   54248 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 22:55:54.943906   54248 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 22:55:54.943997   54248 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 22:55:54.944009   54248 kubeadm.go:322] 
	I0717 22:55:54.944107   54248 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0717 22:55:54.944121   54248 kubeadm.go:322] 
	I0717 22:55:54.944194   54248 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 22:55:54.944204   54248 kubeadm.go:322] 
	I0717 22:55:54.944277   54248 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 22:55:54.944390   54248 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 22:55:54.944472   54248 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 22:55:54.944479   54248 kubeadm.go:322] 
	I0717 22:55:54.944574   54248 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 22:55:54.944667   54248 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 22:55:54.944677   54248 kubeadm.go:322] 
	I0717 22:55:54.944778   54248 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token zn3d72.w9x4plx1baw35867 \
	I0717 22:55:54.944924   54248 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb \
	I0717 22:55:54.944959   54248 kubeadm.go:322] 	--control-plane 
	I0717 22:55:54.944965   54248 kubeadm.go:322] 
	I0717 22:55:54.945096   54248 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 22:55:54.945110   54248 kubeadm.go:322] 
	I0717 22:55:54.945206   54248 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token zn3d72.w9x4plx1baw35867 \
	I0717 22:55:54.945367   54248 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb 
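The join commands printed above carry a --discovery-token-ca-cert-hash, which is simply the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A minimal Go sketch that recomputes such a hash from a CA certificate follows; the file path is an assumption for illustration and is not taken from this log:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Hypothetical path; on a kubeadm control plane the CA usually lives at
		// /etc/kubernetes/pki/ca.crt.
		data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// The discovery hash is sha256 over the DER-encoded SubjectPublicKeyInfo
		// of the CA certificate's public key.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
	}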
	I0717 22:55:54.945384   54248 cni.go:84] Creating CNI manager for ""
	I0717 22:55:54.945396   54248 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:55:54.947694   54248 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 22:55:54.092792   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:54.226690   54649 pod_ready.go:81] duration metric: took 4m0.00109908s waiting for pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace to be "Ready" ...
	E0717 22:55:54.226723   54649 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 22:55:54.226748   54649 pod_ready.go:38] duration metric: took 4m11.075490865s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:55:54.226791   54649 kubeadm.go:640] restartCluster took 4m33.196357187s
	W0717 22:55:54.226860   54649 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0717 22:55:54.226891   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 22:55:54.639076   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:55:54.659284   54573 api_server.go:72] duration metric: took 4m16.00305446s to wait for apiserver process to appear ...
	I0717 22:55:54.659324   54573 api_server.go:88] waiting for apiserver healthz status ...
	I0717 22:55:54.659366   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 22:55:54.659437   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 22:55:54.698007   54573 cri.go:89] found id: "c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f"
	I0717 22:55:54.698036   54573 cri.go:89] found id: ""
	I0717 22:55:54.698045   54573 logs.go:284] 1 containers: [c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f]
	I0717 22:55:54.698104   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:54.704502   54573 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 22:55:54.704584   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 22:55:54.738722   54573 cri.go:89] found id: "98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea"
	I0717 22:55:54.738752   54573 cri.go:89] found id: ""
	I0717 22:55:54.738761   54573 logs.go:284] 1 containers: [98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea]
	I0717 22:55:54.738816   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:54.743815   54573 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 22:55:54.743888   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 22:55:54.789962   54573 cri.go:89] found id: "acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266"
	I0717 22:55:54.789992   54573 cri.go:89] found id: ""
	I0717 22:55:54.790003   54573 logs.go:284] 1 containers: [acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266]
	I0717 22:55:54.790061   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:54.796502   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 22:55:54.796577   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 22:55:54.840319   54573 cri.go:89] found id: "692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629"
	I0717 22:55:54.840349   54573 cri.go:89] found id: ""
	I0717 22:55:54.840358   54573 logs.go:284] 1 containers: [692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629]
	I0717 22:55:54.840418   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:54.847001   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 22:55:54.847074   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 22:55:54.900545   54573 cri.go:89] found id: "9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567"
	I0717 22:55:54.900571   54573 cri.go:89] found id: ""
	I0717 22:55:54.900578   54573 logs.go:284] 1 containers: [9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567]
	I0717 22:55:54.900639   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:54.905595   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 22:55:54.905703   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 22:55:54.940386   54573 cri.go:89] found id: "f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c"
	I0717 22:55:54.940405   54573 cri.go:89] found id: ""
	I0717 22:55:54.940414   54573 logs.go:284] 1 containers: [f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c]
	I0717 22:55:54.940471   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:54.947365   54573 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 22:55:54.947444   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 22:55:54.993902   54573 cri.go:89] found id: ""
	I0717 22:55:54.993930   54573 logs.go:284] 0 containers: []
	W0717 22:55:54.993942   54573 logs.go:286] No container was found matching "kindnet"
	I0717 22:55:54.993950   54573 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 22:55:54.994019   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 22:55:55.040159   54573 cri.go:89] found id: "a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b"
	I0717 22:55:55.040184   54573 cri.go:89] found id: "4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6"
	I0717 22:55:55.040190   54573 cri.go:89] found id: ""
	I0717 22:55:55.040198   54573 logs.go:284] 2 containers: [a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b 4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6]
	I0717 22:55:55.040265   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:55.045151   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:55.050805   54573 logs.go:123] Gathering logs for kubelet ...
	I0717 22:55:55.050831   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 22:55:55.123810   54573 logs.go:123] Gathering logs for describe nodes ...
	I0717 22:55:55.123845   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 22:55:55.306589   54573 logs.go:123] Gathering logs for kube-scheduler [692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629] ...
	I0717 22:55:55.306623   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629"
	I0717 22:55:55.351035   54573 logs.go:123] Gathering logs for kube-controller-manager [f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c] ...
	I0717 22:55:55.351083   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c"
	I0717 22:55:55.416647   54573 logs.go:123] Gathering logs for storage-provisioner [a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b] ...
	I0717 22:55:55.416705   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b"
	I0717 22:55:55.460413   54573 logs.go:123] Gathering logs for CRI-O ...
	I0717 22:55:55.460452   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 22:55:56.034198   54573 logs.go:123] Gathering logs for container status ...
	I0717 22:55:56.034238   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 22:55:56.073509   54573 logs.go:123] Gathering logs for dmesg ...
	I0717 22:55:56.073552   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 22:55:56.086385   54573 logs.go:123] Gathering logs for kube-apiserver [c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f] ...
	I0717 22:55:56.086413   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f"
	I0717 22:55:56.132057   54573 logs.go:123] Gathering logs for etcd [98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea] ...
	I0717 22:55:56.132087   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea"
	I0717 22:55:56.176634   54573 logs.go:123] Gathering logs for coredns [acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266] ...
	I0717 22:55:56.176663   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266"
	I0717 22:55:56.213415   54573 logs.go:123] Gathering logs for kube-proxy [9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567] ...
	I0717 22:55:56.213451   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567"
	I0717 22:55:56.248868   54573 logs.go:123] Gathering logs for storage-provisioner [4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6] ...
	I0717 22:55:56.248912   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6"
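Each "Gathering logs for ..." step above follows the same two-call pattern: resolve container IDs with crictl ps -a --quiet --name=<component>, then tail each container with crictl logs --tail 400 <id>. A rough standalone Go equivalent of that pattern is sketched below; it assumes crictl is on PATH and sudo is available, and it is not minikube's own implementation:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs returns the IDs of all containers whose name matches the
	// given filter, mirroring "crictl ps -a --quiet --name=<name>".
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, component := range []string{"kube-apiserver", "etcd", "coredns"} {
			ids, err := containerIDs(component)
			if err != nil {
				fmt.Println("listing", component, "failed:", err)
				continue
			}
			for _, id := range ids {
				// Tail the last 400 lines, as the gathering step above does.
				logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
				if err != nil {
					fmt.Println("logs for", id, "failed:", err)
					continue
				}
				fmt.Printf("==> %s [%s]\n%s\n", component, id, logs)
			}
		}
	}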
	I0717 22:55:53.969902   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:56.470299   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:54.949399   54248 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 22:55:54.984090   54248 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
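The 457-byte /etc/cni/net.d/1-k8s.conflist copied above is not reproduced in the log. For orientation, a representative bridge CNI conflist of the kind the "Configuring bridge CNI" step installs is sketched below; the field values (name, subnet, plugin list) are illustrative assumptions, not the exact file minikube generates:

	package main

	import "os"

	// A representative bridge CNI conflist. The concrete values here are
	// assumptions for illustration only.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}
	`

	func main() {
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			panic(err)
		}
	}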
	I0717 22:55:55.014819   54248 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 22:55:55.014950   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:55.015014   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.31.0 minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8 minikube.k8s.io/name=embed-certs-571296 minikube.k8s.io/updated_at=2023_07_17T22_55_55_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:55.558851   54248 ops.go:34] apiserver oom_adj: -16
	I0717 22:55:55.558970   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:56.177713   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:56.677742   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:57.177957   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:57.677787   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:58.793638   54573 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0717 22:55:58.806705   54573 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I0717 22:55:58.808953   54573 api_server.go:141] control plane version: v1.27.3
	I0717 22:55:58.808972   54573 api_server.go:131] duration metric: took 4.149642061s to wait for apiserver health ...
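The healthz wait above is essentially an HTTPS GET against the apiserver on port 8443 that succeeds once it returns 200 with the body "ok". A minimal sketch of such a probe follows; certificate verification is skipped only to keep the example short, where a real client would trust the cluster CA instead:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Shortcut for the sketch; trust the cluster CA in real code.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		// Address taken from the log above; substitute your control-plane IP.
		resp, err := client.Get("https://192.168.39.6:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
	}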
	I0717 22:55:58.808979   54573 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 22:55:58.808999   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 22:55:58.809042   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 22:55:58.840945   54573 cri.go:89] found id: "c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f"
	I0717 22:55:58.840965   54573 cri.go:89] found id: ""
	I0717 22:55:58.840972   54573 logs.go:284] 1 containers: [c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f]
	I0717 22:55:58.841028   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:58.845463   54573 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 22:55:58.845557   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 22:55:58.877104   54573 cri.go:89] found id: "98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea"
	I0717 22:55:58.877134   54573 cri.go:89] found id: ""
	I0717 22:55:58.877143   54573 logs.go:284] 1 containers: [98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea]
	I0717 22:55:58.877199   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:58.881988   54573 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 22:55:58.882060   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 22:55:58.920491   54573 cri.go:89] found id: "acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266"
	I0717 22:55:58.920520   54573 cri.go:89] found id: ""
	I0717 22:55:58.920530   54573 logs.go:284] 1 containers: [acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266]
	I0717 22:55:58.920588   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:58.925170   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 22:55:58.925239   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 22:55:58.970908   54573 cri.go:89] found id: "692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629"
	I0717 22:55:58.970928   54573 cri.go:89] found id: ""
	I0717 22:55:58.970937   54573 logs.go:284] 1 containers: [692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629]
	I0717 22:55:58.970988   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:58.976950   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 22:55:58.977005   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 22:55:59.007418   54573 cri.go:89] found id: "9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567"
	I0717 22:55:59.007438   54573 cri.go:89] found id: ""
	I0717 22:55:59.007445   54573 logs.go:284] 1 containers: [9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567]
	I0717 22:55:59.007550   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:59.012222   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 22:55:59.012279   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 22:55:59.048939   54573 cri.go:89] found id: "f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c"
	I0717 22:55:59.048960   54573 cri.go:89] found id: ""
	I0717 22:55:59.048968   54573 logs.go:284] 1 containers: [f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c]
	I0717 22:55:59.049023   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:59.053335   54573 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 22:55:59.053400   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 22:55:59.084168   54573 cri.go:89] found id: ""
	I0717 22:55:59.084198   54573 logs.go:284] 0 containers: []
	W0717 22:55:59.084208   54573 logs.go:286] No container was found matching "kindnet"
	I0717 22:55:59.084221   54573 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 22:55:59.084270   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 22:55:59.117213   54573 cri.go:89] found id: "a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b"
	I0717 22:55:59.117237   54573 cri.go:89] found id: "4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6"
	I0717 22:55:59.117244   54573 cri.go:89] found id: ""
	I0717 22:55:59.117252   54573 logs.go:284] 2 containers: [a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b 4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6]
	I0717 22:55:59.117311   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:59.122816   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:59.127074   54573 logs.go:123] Gathering logs for dmesg ...
	I0717 22:55:59.127095   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 22:55:59.142525   54573 logs.go:123] Gathering logs for kube-apiserver [c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f] ...
	I0717 22:55:59.142557   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f"
	I0717 22:55:59.190652   54573 logs.go:123] Gathering logs for kube-scheduler [692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629] ...
	I0717 22:55:59.190690   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629"
	I0717 22:55:59.231512   54573 logs.go:123] Gathering logs for kube-controller-manager [f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c] ...
	I0717 22:55:59.231547   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c"
	I0717 22:55:59.280732   54573 logs.go:123] Gathering logs for storage-provisioner [4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6] ...
	I0717 22:55:59.280767   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6"
	I0717 22:55:59.318213   54573 logs.go:123] Gathering logs for CRI-O ...
	I0717 22:55:59.318237   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 22:55:59.872973   54573 logs.go:123] Gathering logs for container status ...
	I0717 22:55:59.873017   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 22:55:59.911891   54573 logs.go:123] Gathering logs for kubelet ...
	I0717 22:55:59.911918   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 22:55:59.976450   54573 logs.go:123] Gathering logs for describe nodes ...
	I0717 22:55:59.976483   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 22:56:00.099556   54573 logs.go:123] Gathering logs for etcd [98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea] ...
	I0717 22:56:00.099592   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea"
	I0717 22:56:00.145447   54573 logs.go:123] Gathering logs for coredns [acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266] ...
	I0717 22:56:00.145479   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266"
	I0717 22:56:00.181246   54573 logs.go:123] Gathering logs for kube-proxy [9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567] ...
	I0717 22:56:00.181277   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567"
	I0717 22:56:00.221127   54573 logs.go:123] Gathering logs for storage-provisioner [a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b] ...
	I0717 22:56:00.221150   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b"
	I0717 22:56:02.761729   54573 system_pods.go:59] 8 kube-system pods found
	I0717 22:56:02.761758   54573 system_pods.go:61] "coredns-5d78c9869d-2mpst" [7516b57f-a4cb-4e2f-995e-8e063bed22ae] Running
	I0717 22:56:02.761765   54573 system_pods.go:61] "etcd-no-preload-935524" [b663c4f9-d98e-457d-b511-435bea5e9525] Running
	I0717 22:56:02.761772   54573 system_pods.go:61] "kube-apiserver-no-preload-935524" [fb6f55fc-8705-46aa-a23c-e7870e52e542] Running
	I0717 22:56:02.761778   54573 system_pods.go:61] "kube-controller-manager-no-preload-935524" [37d43d22-e857-4a9b-b0cb-a9fc39931baa] Running
	I0717 22:56:02.761783   54573 system_pods.go:61] "kube-proxy-qhp66" [8bc95955-b7ba-41e3-ac67-604a9695f784] Running
	I0717 22:56:02.761790   54573 system_pods.go:61] "kube-scheduler-no-preload-935524" [86fef4e0-b156-421a-8d53-0d34cae2cdb3] Running
	I0717 22:56:02.761800   54573 system_pods.go:61] "metrics-server-74d5c6b9c-tlbpl" [7c478efe-4435-45dd-a688-745872fc2918] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:56:02.761809   54573 system_pods.go:61] "storage-provisioner" [85812d54-7a57-430b-991e-e301f123a86a] Running
	I0717 22:56:02.761823   54573 system_pods.go:74] duration metric: took 3.952838173s to wait for pod list to return data ...
	I0717 22:56:02.761837   54573 default_sa.go:34] waiting for default service account to be created ...
	I0717 22:56:02.764526   54573 default_sa.go:45] found service account: "default"
	I0717 22:56:02.764547   54573 default_sa.go:55] duration metric: took 2.700233ms for default service account to be created ...
	I0717 22:56:02.764556   54573 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 22:56:02.770288   54573 system_pods.go:86] 8 kube-system pods found
	I0717 22:56:02.770312   54573 system_pods.go:89] "coredns-5d78c9869d-2mpst" [7516b57f-a4cb-4e2f-995e-8e063bed22ae] Running
	I0717 22:56:02.770318   54573 system_pods.go:89] "etcd-no-preload-935524" [b663c4f9-d98e-457d-b511-435bea5e9525] Running
	I0717 22:56:02.770323   54573 system_pods.go:89] "kube-apiserver-no-preload-935524" [fb6f55fc-8705-46aa-a23c-e7870e52e542] Running
	I0717 22:56:02.770327   54573 system_pods.go:89] "kube-controller-manager-no-preload-935524" [37d43d22-e857-4a9b-b0cb-a9fc39931baa] Running
	I0717 22:56:02.770330   54573 system_pods.go:89] "kube-proxy-qhp66" [8bc95955-b7ba-41e3-ac67-604a9695f784] Running
	I0717 22:56:02.770334   54573 system_pods.go:89] "kube-scheduler-no-preload-935524" [86fef4e0-b156-421a-8d53-0d34cae2cdb3] Running
	I0717 22:56:02.770340   54573 system_pods.go:89] "metrics-server-74d5c6b9c-tlbpl" [7c478efe-4435-45dd-a688-745872fc2918] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:56:02.770346   54573 system_pods.go:89] "storage-provisioner" [85812d54-7a57-430b-991e-e301f123a86a] Running
	I0717 22:56:02.770354   54573 system_pods.go:126] duration metric: took 5.793179ms to wait for k8s-apps to be running ...
	I0717 22:56:02.770362   54573 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 22:56:02.770410   54573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:56:02.786132   54573 system_svc.go:56] duration metric: took 15.760975ms WaitForService to wait for kubelet.
	I0717 22:56:02.786161   54573 kubeadm.go:581] duration metric: took 4m24.129949995s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 22:56:02.786182   54573 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:56:02.789957   54573 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:56:02.789978   54573 node_conditions.go:123] node cpu capacity is 2
	I0717 22:56:02.789988   54573 node_conditions.go:105] duration metric: took 3.802348ms to run NodePressure ...
	I0717 22:56:02.789999   54573 start.go:228] waiting for startup goroutines ...
	I0717 22:56:02.790008   54573 start.go:233] waiting for cluster config update ...
	I0717 22:56:02.790021   54573 start.go:242] writing updated cluster config ...
	I0717 22:56:02.790308   54573 ssh_runner.go:195] Run: rm -f paused
	I0717 22:56:02.840154   54573 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 22:56:02.843243   54573 out.go:177] * Done! kubectl is now configured to use "no-preload-935524" cluster and "default" namespace by default
	I0717 22:55:58.471229   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:00.969263   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:58.177892   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:58.677211   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:59.177916   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:59.678088   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:00.177933   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:00.678096   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:01.177184   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:01.677152   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:02.177561   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:02.677947   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:02.970089   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:05.470783   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:03.177870   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:03.677715   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:04.177238   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:04.677261   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:05.177220   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:05.678164   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:06.177948   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:06.677392   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:07.177167   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:07.678131   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:07.945881   54248 kubeadm.go:1081] duration metric: took 12.930982407s to wait for elevateKubeSystemPrivileges.
	I0717 22:56:07.945928   54248 kubeadm.go:406] StartCluster complete in 5m28.89261834s
	I0717 22:56:07.945958   54248 settings.go:142] acquiring lock: {Name:mk9be0d05c943f5010cb1ca9690e7c6a87f18950 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:56:07.946058   54248 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:56:07.948004   54248 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/kubeconfig: {Name:mk1295c692902c2f497638c9fc1fd126fdb1a2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:56:07.948298   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 22:56:07.948538   54248 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 22:56:07.948628   54248 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-571296"
	I0717 22:56:07.948639   54248 config.go:182] Loaded profile config "embed-certs-571296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:56:07.948657   54248 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-571296"
	W0717 22:56:07.948669   54248 addons.go:240] addon storage-provisioner should already be in state true
	I0717 22:56:07.948687   54248 addons.go:69] Setting default-storageclass=true in profile "embed-certs-571296"
	I0717 22:56:07.948708   54248 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-571296"
	I0717 22:56:07.948713   54248 host.go:66] Checking if "embed-certs-571296" exists ...
	I0717 22:56:07.949078   54248 addons.go:69] Setting metrics-server=true in profile "embed-certs-571296"
	I0717 22:56:07.949100   54248 addons.go:231] Setting addon metrics-server=true in "embed-certs-571296"
	I0717 22:56:07.949101   54248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	W0717 22:56:07.949107   54248 addons.go:240] addon metrics-server should already be in state true
	I0717 22:56:07.949126   54248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:07.949148   54248 host.go:66] Checking if "embed-certs-571296" exists ...
	I0717 22:56:07.949361   54248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:07.949390   54248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:07.949481   54248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:07.949508   54248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:07.967136   54248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35403
	I0717 22:56:07.967705   54248 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:07.967874   54248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43925
	I0717 22:56:07.968286   54248 main.go:141] libmachine: Using API Version  1
	I0717 22:56:07.968317   54248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:07.968395   54248 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:07.968741   54248 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:07.969000   54248 main.go:141] libmachine: Using API Version  1
	I0717 22:56:07.969019   54248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:07.969056   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetState
	I0717 22:56:07.969416   54248 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:07.969964   54248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:07.969993   54248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:07.970220   54248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37531
	I0717 22:56:07.970682   54248 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:07.971172   54248 main.go:141] libmachine: Using API Version  1
	I0717 22:56:07.971194   54248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:07.971603   54248 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:07.972617   54248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:07.972655   54248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:07.988352   54248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38131
	I0717 22:56:07.988872   54248 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:07.989481   54248 main.go:141] libmachine: Using API Version  1
	I0717 22:56:07.989507   54248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:07.989913   54248 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:07.990198   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetState
	I0717 22:56:07.992174   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:56:07.992359   54248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34283
	I0717 22:56:07.993818   54248 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 22:56:07.995350   54248 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 22:56:07.995373   54248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 22:56:07.995393   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:56:07.992931   54248 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:07.995909   54248 main.go:141] libmachine: Using API Version  1
	I0717 22:56:07.995933   54248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:07.996276   54248 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:07.996424   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetState
	I0717 22:56:07.998630   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:56:08.000660   54248 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:56:07.999385   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:56:07.999983   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:56:08.002498   54248 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:56:08.002510   54248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 22:56:08.002529   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:56:08.002556   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:56:08.002587   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:56:08.002626   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:56:08.002714   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:56:08.002874   54248 sshutil.go:53] new ssh client: &{IP:192.168.61.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa Username:docker}
	I0717 22:56:08.003290   54248 addons.go:231] Setting addon default-storageclass=true in "embed-certs-571296"
	W0717 22:56:08.003311   54248 addons.go:240] addon default-storageclass should already be in state true
	I0717 22:56:08.003340   54248 host.go:66] Checking if "embed-certs-571296" exists ...
	I0717 22:56:08.003736   54248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:08.003763   54248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:08.005771   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:56:08.006163   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:56:08.006194   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:56:08.006393   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:56:08.006560   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:56:08.006744   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:56:08.006890   54248 sshutil.go:53] new ssh client: &{IP:192.168.61.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa Username:docker}
	I0717 22:56:08.025042   54248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42975
	I0717 22:56:08.025743   54248 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:08.026232   54248 main.go:141] libmachine: Using API Version  1
	I0717 22:56:08.026252   54248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:08.026732   54248 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:08.027295   54248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:08.027340   54248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:08.044326   54248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40863
	I0717 22:56:08.044743   54248 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:08.045285   54248 main.go:141] libmachine: Using API Version  1
	I0717 22:56:08.045309   54248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:08.045686   54248 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:08.045900   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetState
	I0717 22:56:08.047695   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:56:08.047962   54248 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 22:56:08.047980   54248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 22:56:08.048000   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:56:08.050685   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:56:08.051084   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:56:08.051115   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:56:08.051376   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:56:08.051561   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:56:08.051762   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:56:08.051880   54248 sshutil.go:53] new ssh client: &{IP:192.168.61.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa Username:docker}
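Each "new ssh client" line above records an IP, port 22, a per-machine private key and the docker user. A sketch of opening an equivalent session with golang.org/x/crypto/ssh, written under those assumptions rather than taken from minikube's own sshutil code:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path and address as reported in the log lines above.
		keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User: "docker",
			Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
			// Acceptable for a throwaway test VM; pin the host key otherwise.
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		}
		client, err := ssh.Dial("tcp", "192.168.61.179:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()
		out, err := session.CombinedOutput("sudo crictl ps -a --quiet")
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out))
	}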
	I0717 22:56:08.221022   54248 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 22:56:08.221057   54248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 22:56:08.262777   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
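The pipeline above splices a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway, then feeds the result back through kubectl replace. The same edit could be made with client-go instead of sed; a sketch along those lines (error handling trimmed, kubeconfig path and gateway IP taken from the log):

	package main

	import (
		"context"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()
		cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		hosts := "        hosts {\n" +
			"           192.168.61.1 host.minikube.internal\n" +
			"           fallthrough\n" +
			"        }\n"
		corefile := cm.Data["Corefile"]
		if !strings.Contains(corefile, "host.minikube.internal") {
			// Insert the hosts block just before the forward directive,
			// mirroring the sed expression in the log above.
			cm.Data["Corefile"] = strings.Replace(corefile, "        forward .", hosts+"        forward .", 1)
			if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
				panic(err)
			}
		}
	}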
	I0717 22:56:08.286077   54248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:56:08.301703   54248 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 22:56:08.301728   54248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 22:56:08.314524   54248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 22:56:08.370967   54248 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 22:56:08.370989   54248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 22:56:08.585011   54248 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-571296" context rescaled to 1 replicas
	I0717 22:56:08.585061   54248 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.179 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 22:56:08.587143   54248 out.go:177] * Verifying Kubernetes components...
	I0717 22:56:08.588842   54248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:56:08.666555   54248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 22:56:10.506154   54248 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.243338067s)
	I0717 22:56:10.506244   54248 start.go:901] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0717 22:56:11.016648   54248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.730514867s)
	I0717 22:56:11.016699   54248 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.427824424s)
	I0717 22:56:11.016659   54248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.702100754s)
	I0717 22:56:11.016728   54248 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:11.016733   54248 node_ready.go:35] waiting up to 6m0s for node "embed-certs-571296" to be "Ready" ...
	I0717 22:56:11.016742   54248 main.go:141] libmachine: (embed-certs-571296) Calling .Close
	I0717 22:56:11.016707   54248 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:11.016862   54248 main.go:141] libmachine: (embed-certs-571296) Calling .Close
	I0717 22:56:11.017139   54248 main.go:141] libmachine: (embed-certs-571296) DBG | Closing plugin on server side
	I0717 22:56:11.017150   54248 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:11.017165   54248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:11.017168   54248 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:11.017175   54248 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:11.017177   54248 main.go:141] libmachine: (embed-certs-571296) DBG | Closing plugin on server side
	I0717 22:56:11.017183   54248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:11.017186   54248 main.go:141] libmachine: (embed-certs-571296) Calling .Close
	I0717 22:56:11.017196   54248 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:11.017242   54248 main.go:141] libmachine: (embed-certs-571296) Calling .Close
	I0717 22:56:11.017409   54248 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:11.017425   54248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:11.017443   54248 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:11.017452   54248 main.go:141] libmachine: (embed-certs-571296) Calling .Close
	I0717 22:56:11.017571   54248 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:11.017600   54248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:11.018689   54248 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:11.018706   54248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:11.018703   54248 main.go:141] libmachine: (embed-certs-571296) DBG | Closing plugin on server side
	I0717 22:56:11.043490   54248 node_ready.go:49] node "embed-certs-571296" has status "Ready":"True"
	I0717 22:56:11.043511   54248 node_ready.go:38] duration metric: took 26.766819ms waiting for node "embed-certs-571296" to be "Ready" ...
	I0717 22:56:11.043518   54248 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:56:11.057095   54248 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-6ljtn" in "kube-system" namespace to be "Ready" ...
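The pod_ready waits above (including the metrics-server waits that never complete in the failing tests) reduce to polling a pod's Ready condition until it is True or a timeout expires. A bare-bones client-go version of that loop, assuming the kubeconfig path used elsewhere in this log:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls until the named pod reports the Ready condition True,
	// or the timeout expires, roughly what the pod_ready waits above do.
	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
			}
			time.Sleep(2 * time.Second)
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Pod name taken from the wait in the log above.
		if err := waitPodReady(cs, "kube-system", "coredns-5d78c9869d-6ljtn", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}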
	I0717 22:56:11.116641   54248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.450034996s)
	I0717 22:56:11.116706   54248 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:11.116724   54248 main.go:141] libmachine: (embed-certs-571296) Calling .Close
	I0717 22:56:11.117015   54248 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:11.117034   54248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:11.117046   54248 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:11.117058   54248 main.go:141] libmachine: (embed-certs-571296) Calling .Close
	I0717 22:56:11.117341   54248 main.go:141] libmachine: (embed-certs-571296) DBG | Closing plugin on server side
	I0717 22:56:11.117389   54248 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:11.117408   54248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:11.117427   54248 addons.go:467] Verifying addon metrics-server=true in "embed-certs-571296"
	I0717 22:56:11.119741   54248 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 22:56:07.979850   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:10.471118   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:12.472257   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:11.122047   54248 addons.go:502] enable addons completed in 3.173503334s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 22:56:12.605075   54248 pod_ready.go:92] pod "coredns-5d78c9869d-6ljtn" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:12.605111   54248 pod_ready.go:81] duration metric: took 1.547984916s waiting for pod "coredns-5d78c9869d-6ljtn" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:12.605126   54248 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-tq27r" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:12.619682   54248 pod_ready.go:92] pod "coredns-5d78c9869d-tq27r" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:12.619710   54248 pod_ready.go:81] duration metric: took 14.576786ms waiting for pod "coredns-5d78c9869d-tq27r" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:12.619722   54248 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:12.628850   54248 pod_ready.go:92] pod "etcd-embed-certs-571296" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:12.628878   54248 pod_ready.go:81] duration metric: took 9.147093ms waiting for pod "etcd-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:12.628889   54248 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:12.641360   54248 pod_ready.go:92] pod "kube-apiserver-embed-certs-571296" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:12.641381   54248 pod_ready.go:81] duration metric: took 12.485183ms waiting for pod "kube-apiserver-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:12.641391   54248 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:12.656634   54248 pod_ready.go:92] pod "kube-controller-manager-embed-certs-571296" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:12.656663   54248 pod_ready.go:81] duration metric: took 15.264878ms waiting for pod "kube-controller-manager-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:12.656677   54248 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xjpds" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:14.480168   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:16.969340   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:13.530098   54248 pod_ready.go:92] pod "kube-proxy-xjpds" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:13.530129   54248 pod_ready.go:81] duration metric: took 873.444575ms waiting for pod "kube-proxy-xjpds" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:13.530144   54248 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:13.821592   54248 pod_ready.go:92] pod "kube-scheduler-embed-certs-571296" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:13.821615   54248 pod_ready.go:81] duration metric: took 291.46393ms waiting for pod "kube-scheduler-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:13.821625   54248 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:16.228210   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:19.470498   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:21.969531   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:18.228289   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:20.228420   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:22.228472   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:26.250616   54649 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.023698231s)
	I0717 22:56:26.250690   54649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:56:26.264095   54649 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 22:56:26.274295   54649 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 22:56:26.284265   54649 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:56:26.284332   54649 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 22:56:26.341601   54649 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0717 22:56:26.341719   54649 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 22:56:26.507992   54649 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 22:56:26.508194   54649 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 22:56:26.508344   54649 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 22:56:26.684682   54649 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 22:56:26.686603   54649 out.go:204]   - Generating certificates and keys ...
	I0717 22:56:26.686753   54649 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 22:56:26.686833   54649 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 22:56:26.686963   54649 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 22:56:26.687386   54649 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0717 22:56:26.687802   54649 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 22:56:26.688484   54649 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0717 22:56:26.689007   54649 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0717 22:56:26.689618   54649 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0717 22:56:26.690234   54649 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 22:56:26.690845   54649 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 22:56:26.691391   54649 kubeadm.go:322] [certs] Using the existing "sa" key
	I0717 22:56:26.691484   54649 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 22:56:26.793074   54649 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 22:56:26.956354   54649 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 22:56:27.033560   54649 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 22:56:27.222598   54649 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 22:56:27.242695   54649 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 22:56:27.243923   54649 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 22:56:27.244009   54649 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 22:56:27.382359   54649 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 22:56:27.385299   54649 out.go:204]   - Booting up control plane ...
	I0717 22:56:27.385459   54649 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 22:56:27.385595   54649 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 22:56:27.385699   54649 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 22:56:27.386230   54649 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 22:56:27.388402   54649 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 22:56:24.469634   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:26.470480   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:24.231654   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:26.728390   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:28.471360   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:30.493443   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:28.728821   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:30.729474   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:32.731419   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:35.894189   54649 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.505577 seconds
	I0717 22:56:35.894298   54649 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 22:56:35.922569   54649 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 22:56:36.459377   54649 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 22:56:36.459628   54649 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-504828 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 22:56:36.981248   54649 kubeadm.go:322] [bootstrap-token] Using token: aq0fl5.e7xnmbjqmeipfdlw
	I0717 22:56:36.983221   54649 out.go:204]   - Configuring RBAC rules ...
	I0717 22:56:36.983401   54649 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 22:56:37.001576   54649 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 22:56:37.012679   54649 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 22:56:37.018002   54649 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 22:56:37.025356   54649 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 22:56:37.030822   54649 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 22:56:37.049741   54649 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 22:56:37.309822   54649 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 22:56:37.414906   54649 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 22:56:37.414947   54649 kubeadm.go:322] 
	I0717 22:56:37.415023   54649 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 22:56:37.415035   54649 kubeadm.go:322] 
	I0717 22:56:37.415135   54649 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 22:56:37.415145   54649 kubeadm.go:322] 
	I0717 22:56:37.415190   54649 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 22:56:37.415290   54649 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 22:56:37.415373   54649 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 22:56:37.415383   54649 kubeadm.go:322] 
	I0717 22:56:37.415495   54649 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0717 22:56:37.415529   54649 kubeadm.go:322] 
	I0717 22:56:37.415593   54649 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 22:56:37.415602   54649 kubeadm.go:322] 
	I0717 22:56:37.415677   54649 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 22:56:37.415755   54649 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 22:56:37.415892   54649 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 22:56:37.415904   54649 kubeadm.go:322] 
	I0717 22:56:37.416034   54649 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 22:56:37.416151   54649 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 22:56:37.416172   54649 kubeadm.go:322] 
	I0717 22:56:37.416306   54649 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token aq0fl5.e7xnmbjqmeipfdlw \
	I0717 22:56:37.416451   54649 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb \
	I0717 22:56:37.416478   54649 kubeadm.go:322] 	--control-plane 
	I0717 22:56:37.416487   54649 kubeadm.go:322] 
	I0717 22:56:37.416596   54649 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 22:56:37.416606   54649 kubeadm.go:322] 
	I0717 22:56:37.416708   54649 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token aq0fl5.e7xnmbjqmeipfdlw \
	I0717 22:56:37.416850   54649 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb 
	I0717 22:56:37.417385   54649 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 22:56:37.417413   54649 cni.go:84] Creating CNI manager for ""
	I0717 22:56:37.417426   54649 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:56:37.419367   54649 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 22:56:37.421047   54649 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 22:56:37.456430   54649 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 22:56:37.520764   54649 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 22:56:37.520861   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:37.520877   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.31.0 minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8 minikube.k8s.io/name=default-k8s-diff-port-504828 minikube.k8s.io/updated_at=2023_07_17T22_56_37_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:32.970043   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:35.469085   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:35.257714   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:37.730437   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:37.914888   54649 ops.go:34] apiserver oom_adj: -16
	I0717 22:56:37.914920   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:38.508471   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:39.008147   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:39.508371   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:40.008059   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:40.508319   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:41.008945   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:41.507958   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:42.008509   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:42.508920   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:37.969711   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:39.970230   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:42.468790   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:40.227771   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:42.228268   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:43.008542   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:43.508809   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:44.008922   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:44.508771   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:45.008681   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:45.507925   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:46.008078   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:46.508950   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:47.008902   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:47.508705   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:44.470199   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:46.969467   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:44.728843   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:46.729321   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:48.008736   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:48.508008   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:49.008524   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:49.508783   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:50.008620   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:50.508131   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:50.675484   54649 kubeadm.go:1081] duration metric: took 13.154682677s to wait for elevateKubeSystemPrivileges.
	I0717 22:56:50.675522   54649 kubeadm.go:406] StartCluster complete in 5m29.688096626s
	I0717 22:56:50.675542   54649 settings.go:142] acquiring lock: {Name:mk9be0d05c943f5010cb1ca9690e7c6a87f18950 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:56:50.675625   54649 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:56:50.678070   54649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/kubeconfig: {Name:mk1295c692902c2f497638c9fc1fd126fdb1a2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:56:50.678358   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 22:56:50.678397   54649 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 22:56:50.678485   54649 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-504828"
	I0717 22:56:50.678504   54649 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-504828"
	I0717 22:56:50.678504   54649 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-504828"
	W0717 22:56:50.678515   54649 addons.go:240] addon storage-provisioner should already be in state true
	I0717 22:56:50.678526   54649 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-504828"
	I0717 22:56:50.678537   54649 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-504828"
	I0717 22:56:50.678557   54649 host.go:66] Checking if "default-k8s-diff-port-504828" exists ...
	I0717 22:56:50.678561   54649 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-504828"
	W0717 22:56:50.678571   54649 addons.go:240] addon metrics-server should already be in state true
	I0717 22:56:50.678630   54649 host.go:66] Checking if "default-k8s-diff-port-504828" exists ...
	I0717 22:56:50.678570   54649 config.go:182] Loaded profile config "default-k8s-diff-port-504828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:56:50.678961   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:50.678995   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:50.679011   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:50.679039   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:50.678962   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:50.679094   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:50.696229   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43571
	I0717 22:56:50.696669   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:50.697375   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:56:50.697414   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:50.697831   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:50.698436   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:50.698474   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:50.698998   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46873
	I0717 22:56:50.699168   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41135
	I0717 22:56:50.699382   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:50.699530   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:50.699812   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:56:50.699824   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:50.700021   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:56:50.700044   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:50.700219   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:50.700385   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:50.700570   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetState
	I0717 22:56:50.700748   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:50.700785   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:50.715085   54649 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-504828"
	W0717 22:56:50.715119   54649 addons.go:240] addon default-storageclass should already be in state true
	I0717 22:56:50.715149   54649 host.go:66] Checking if "default-k8s-diff-port-504828" exists ...
	I0717 22:56:50.715547   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:50.715580   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:50.715831   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41743
	I0717 22:56:50.716347   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:50.716905   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:56:50.716921   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:50.717285   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:50.717334   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41035
	I0717 22:56:50.717493   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetState
	I0717 22:56:50.717699   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:50.718238   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:56:50.718257   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:50.718580   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:50.718843   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetState
	I0717 22:56:50.719486   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:56:50.721699   54649 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 22:56:50.723464   54649 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 22:56:50.723484   54649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 22:56:50.720832   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:56:50.723509   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:56:50.725600   54649 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:56:50.728061   54649 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:56:50.726758   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:56:50.727455   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:56:50.728105   54649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 22:56:50.728133   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:56:50.728134   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:56:50.728166   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:56:50.728380   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:56:50.728785   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:56:50.728938   54649 sshutil.go:53] new ssh client: &{IP:192.168.72.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa Username:docker}
	I0717 22:56:50.731891   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:56:50.732348   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:56:50.732379   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:56:50.732589   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:56:50.732793   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:56:50.732974   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:56:50.733113   54649 sshutil.go:53] new ssh client: &{IP:192.168.72.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa Username:docker}
	I0717 22:56:50.741098   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34891
	I0717 22:56:50.741744   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:50.742386   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:56:50.742410   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:50.742968   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:50.743444   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:50.743490   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:50.759985   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38185
	I0717 22:56:50.760547   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:50.761145   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:56:50.761171   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:50.761598   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:50.761779   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetState
	I0717 22:56:50.763276   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:56:50.763545   54649 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 22:56:50.763559   54649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 22:56:50.763574   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:56:50.766525   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:56:50.766964   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:56:50.766995   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:56:50.767254   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:56:50.767444   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:56:50.767636   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:56:50.767803   54649 sshutil.go:53] new ssh client: &{IP:192.168.72.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa Username:docker}
	I0717 22:56:50.963671   54649 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 22:56:50.963698   54649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 22:56:50.982828   54649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 22:56:50.985884   54649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:56:50.989077   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 22:56:51.020140   54649 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 22:56:51.020174   54649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 22:56:51.094548   54649 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 22:56:51.094574   54649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 22:56:51.185896   54649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 22:56:51.238666   54649 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-504828" context rescaled to 1 replicas
	I0717 22:56:51.238704   54649 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.118 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 22:56:51.241792   54649 out.go:177] * Verifying Kubernetes components...
	I0717 22:56:51.243720   54649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:56:49.470925   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:51.970366   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:48.732421   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:50.742608   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:52.980991   54649 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.998121603s)
	I0717 22:56:52.981060   54649 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:52.981078   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .Close
	I0717 22:56:52.981422   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | Closing plugin on server side
	I0717 22:56:52.981424   54649 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:52.981460   54649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:52.981472   54649 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:52.981486   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .Close
	I0717 22:56:52.981815   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | Closing plugin on server side
	I0717 22:56:52.981906   54649 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:52.981923   54649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:52.981962   54649 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:52.981979   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .Close
	I0717 22:56:52.982328   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | Closing plugin on server side
	I0717 22:56:52.982335   54649 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:52.982352   54649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:53.384207   54649 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.398283926s)
	I0717 22:56:53.384259   54649 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:53.384263   54649 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.39515958s)
	I0717 22:56:53.384272   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .Close
	I0717 22:56:53.384280   54649 start.go:901] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0717 22:56:53.384588   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | Closing plugin on server side
	I0717 22:56:53.384664   54649 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:53.384680   54649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:53.384694   54649 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:53.384711   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .Close
	I0717 22:56:53.385419   54649 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:53.385438   54649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:53.385446   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | Closing plugin on server side
	I0717 22:56:53.810615   54649 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.624668019s)
	I0717 22:56:53.810613   54649 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.5668435s)
	I0717 22:56:53.810690   54649 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:53.810712   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .Close
	I0717 22:56:53.810717   54649 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-504828" to be "Ready" ...
	I0717 22:56:53.811092   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | Closing plugin on server side
	I0717 22:56:53.811172   54649 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:53.811191   54649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:53.811209   54649 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:53.811223   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .Close
	I0717 22:56:53.811501   54649 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:53.811519   54649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:53.811529   54649 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-504828"
	I0717 22:56:53.813588   54649 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0717 22:56:53.815209   54649 addons.go:502] enable addons completed in 3.136812371s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0717 22:56:53.848049   54649 node_ready.go:49] node "default-k8s-diff-port-504828" has status "Ready":"True"
	I0717 22:56:53.848070   54649 node_ready.go:38] duration metric: took 37.336626ms waiting for node "default-k8s-diff-port-504828" to be "Ready" ...
	I0717 22:56:53.848078   54649 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:56:53.869392   54649 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-rqcjj" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.922409   54649 pod_ready.go:92] pod "coredns-5d78c9869d-rqcjj" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:55.922433   54649 pod_ready.go:81] duration metric: took 2.05301467s waiting for pod "coredns-5d78c9869d-rqcjj" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.922442   54649 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-xz4mj" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.930140   54649 pod_ready.go:92] pod "coredns-5d78c9869d-xz4mj" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:55.930162   54649 pod_ready.go:81] duration metric: took 7.714745ms waiting for pod "coredns-5d78c9869d-xz4mj" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.930171   54649 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.938968   54649 pod_ready.go:92] pod "etcd-default-k8s-diff-port-504828" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:55.938994   54649 pod_ready.go:81] duration metric: took 8.813777ms waiting for pod "etcd-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.939006   54649 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.950100   54649 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-504828" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:55.950127   54649 pod_ready.go:81] duration metric: took 11.110719ms waiting for pod "kube-apiserver-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.950141   54649 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.956205   54649 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-504828" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:55.956228   54649 pod_ready.go:81] duration metric: took 6.078268ms waiting for pod "kube-controller-manager-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.956240   54649 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nmtc8" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:56.318975   54649 pod_ready.go:92] pod "kube-proxy-nmtc8" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:56.319002   54649 pod_ready.go:81] duration metric: took 362.754902ms waiting for pod "kube-proxy-nmtc8" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:56.319012   54649 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:56.725010   54649 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-504828" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:56.725042   54649 pod_ready.go:81] duration metric: took 406.022192ms waiting for pod "kube-scheduler-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:56.725059   54649 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:53.971176   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:56.468730   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:57.063020   53870 pod_ready.go:81] duration metric: took 4m0.001070587s waiting for pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace to be "Ready" ...
	E0717 22:56:57.063061   53870 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 22:56:57.063088   53870 pod_ready.go:38] duration metric: took 4m1.198793286s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:56:57.063114   53870 kubeadm.go:640] restartCluster took 5m14.33125167s
	W0717 22:56:57.063164   53870 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0717 22:56:57.063188   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 22:56:53.230170   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:55.230713   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:57.729746   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:59.128445   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:01.628013   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:59.730555   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:02.228533   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:03.628469   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:06.127096   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:04.228878   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:06.229004   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:08.128257   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:10.128530   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:12.128706   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:10.086799   53870 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.023585108s)
	I0717 22:57:10.086877   53870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:57:10.102476   53870 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 22:57:10.112904   53870 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 22:57:10.123424   53870 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:57:10.123471   53870 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0717 22:57:10.352747   53870 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 22:57:08.232655   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:10.730595   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:14.129308   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:16.627288   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:13.230023   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:15.730720   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:18.628332   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:20.629305   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:18.227910   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:20.228411   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:22.230069   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:23.708206   53870 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0717 22:57:23.708283   53870 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 22:57:23.708382   53870 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 22:57:23.708529   53870 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 22:57:23.708651   53870 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 22:57:23.708789   53870 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 22:57:23.708916   53870 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 22:57:23.708988   53870 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0717 22:57:23.709078   53870 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 22:57:23.710652   53870 out.go:204]   - Generating certificates and keys ...
	I0717 22:57:23.710759   53870 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 22:57:23.710840   53870 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 22:57:23.710959   53870 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 22:57:23.711058   53870 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0717 22:57:23.711156   53870 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 22:57:23.711234   53870 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0717 22:57:23.711314   53870 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0717 22:57:23.711415   53870 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0717 22:57:23.711522   53870 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 22:57:23.711635   53870 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 22:57:23.711697   53870 kubeadm.go:322] [certs] Using the existing "sa" key
	I0717 22:57:23.711776   53870 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 22:57:23.711831   53870 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 22:57:23.711892   53870 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 22:57:23.711978   53870 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 22:57:23.712048   53870 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 22:57:23.712136   53870 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 22:57:23.713799   53870 out.go:204]   - Booting up control plane ...
	I0717 22:57:23.713909   53870 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 22:57:23.714033   53870 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 22:57:23.714145   53870 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 22:57:23.714268   53870 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 22:57:23.714418   53870 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 22:57:23.714483   53870 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.004162 seconds
	I0717 22:57:23.714656   53870 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 22:57:23.714846   53870 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 22:57:23.714929   53870 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 22:57:23.715088   53870 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-332820 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0717 22:57:23.715170   53870 kubeadm.go:322] [bootstrap-token] Using token: sjemvm.5nuhmbx5uh7jm9fo
	I0717 22:57:23.716846   53870 out.go:204]   - Configuring RBAC rules ...
	I0717 22:57:23.716937   53870 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 22:57:23.717067   53870 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 22:57:23.717210   53870 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 22:57:23.717333   53870 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 22:57:23.717414   53870 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 22:57:23.717456   53870 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 22:57:23.717494   53870 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 22:57:23.717501   53870 kubeadm.go:322] 
	I0717 22:57:23.717564   53870 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 22:57:23.717571   53870 kubeadm.go:322] 
	I0717 22:57:23.717636   53870 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 22:57:23.717641   53870 kubeadm.go:322] 
	I0717 22:57:23.717662   53870 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 22:57:23.717733   53870 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 22:57:23.717783   53870 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 22:57:23.717791   53870 kubeadm.go:322] 
	I0717 22:57:23.717839   53870 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 22:57:23.717946   53870 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 22:57:23.718040   53870 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 22:57:23.718052   53870 kubeadm.go:322] 
	I0717 22:57:23.718172   53870 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0717 22:57:23.718289   53870 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 22:57:23.718299   53870 kubeadm.go:322] 
	I0717 22:57:23.718373   53870 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token sjemvm.5nuhmbx5uh7jm9fo \
	I0717 22:57:23.718476   53870 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb \
	I0717 22:57:23.718525   53870 kubeadm.go:322]     --control-plane 	  
	I0717 22:57:23.718539   53870 kubeadm.go:322] 
	I0717 22:57:23.718624   53870 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 22:57:23.718631   53870 kubeadm.go:322] 
	I0717 22:57:23.718703   53870 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token sjemvm.5nuhmbx5uh7jm9fo \
	I0717 22:57:23.718812   53870 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb 
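Aside (illustrative, not output from this run): the --discovery-token-ca-cert-hash value printed above is just the SHA-256 of the cluster CA's public key. Since this profile's certificateDir is /var/lib/minikube/certs, the conventional kubeadm recipe for recomputing it on the node would look roughly like:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'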
	I0717 22:57:23.718825   53870 cni.go:84] Creating CNI manager for ""
	I0717 22:57:23.718834   53870 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:57:23.720891   53870 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 22:57:23.128941   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:25.129405   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:27.129595   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:23.722935   53870 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 22:57:23.738547   53870 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 22:57:23.764002   53870 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 22:57:23.764109   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.0 minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8 minikube.k8s.io/name=old-k8s-version-332820 minikube.k8s.io/updated_at=2023_07_17T22_57_23_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:23.764127   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
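Aside (illustrative, not from this run): the two kubectl invocations above label the new node and bind cluster-admin to the kube-system default service account. Against this profile's kubeconfig they could be spot-checked with standard kubectl queries, e.g.:

    kubectl get node old-k8s-version-332820 --show-labels
    kubectl get clusterrolebinding minikube-rbac -o wide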
	I0717 22:57:23.835900   53870 ops.go:34] apiserver oom_adj: -16
	I0717 22:57:24.015975   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:24.622866   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:25.122754   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:25.622733   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:26.123442   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:26.623190   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:27.123191   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:27.622408   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:24.729678   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:26.730278   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:29.629588   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:32.130357   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:28.122555   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:28.622771   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:29.122717   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:29.622760   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:30.123186   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:30.622731   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:31.122724   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:31.622957   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:32.122775   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:32.622552   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:29.228462   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:31.232382   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:34.629160   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:37.128209   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:33.122703   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:33.623262   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:34.122574   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:34.623130   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:35.122819   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:35.622426   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:36.123262   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:36.622474   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:37.122820   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:37.623414   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:33.244514   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:35.735391   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:38.123076   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:38.622497   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:39.122826   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:39.220042   53870 kubeadm.go:1081] duration metric: took 15.45599881s to wait for elevateKubeSystemPrivileges.
	I0717 22:57:39.220076   53870 kubeadm.go:406] StartCluster complete in 5m56.5464295s
	I0717 22:57:39.220095   53870 settings.go:142] acquiring lock: {Name:mk9be0d05c943f5010cb1ca9690e7c6a87f18950 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:57:39.220173   53870 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:57:39.221940   53870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/kubeconfig: {Name:mk1295c692902c2f497638c9fc1fd126fdb1a2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:57:39.222201   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 22:57:39.222371   53870 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 22:57:39.222458   53870 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-332820"
	I0717 22:57:39.222474   53870 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-332820"
	W0717 22:57:39.222486   53870 addons.go:240] addon storage-provisioner should already be in state true
	I0717 22:57:39.222517   53870 config.go:182] Loaded profile config "old-k8s-version-332820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0717 22:57:39.222533   53870 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-332820"
	I0717 22:57:39.222544   53870 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-332820"
	I0717 22:57:39.222528   53870 host.go:66] Checking if "old-k8s-version-332820" exists ...
	I0717 22:57:39.222906   53870 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-332820"
	I0717 22:57:39.222947   53870 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-332820"
	I0717 22:57:39.222955   53870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:57:39.222965   53870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:57:39.222978   53870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:57:39.222989   53870 main.go:141] libmachine: Launching plugin server for driver kvm2
	W0717 22:57:39.222958   53870 addons.go:240] addon metrics-server should already be in state true
	I0717 22:57:39.223266   53870 host.go:66] Checking if "old-k8s-version-332820" exists ...
	I0717 22:57:39.223611   53870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:57:39.223644   53870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:57:39.241834   53870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36257
	I0717 22:57:39.242161   53870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36597
	I0717 22:57:39.242290   53870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41325
	I0717 22:57:39.242409   53870 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:57:39.242525   53870 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:57:39.242699   53870 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:57:39.242983   53870 main.go:141] libmachine: Using API Version  1
	I0717 22:57:39.242995   53870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:57:39.243079   53870 main.go:141] libmachine: Using API Version  1
	I0717 22:57:39.243085   53870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:57:39.243146   53870 main.go:141] libmachine: Using API Version  1
	I0717 22:57:39.243152   53870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:57:39.243455   53870 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:57:39.243499   53870 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:57:39.243923   53870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:57:39.243955   53870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:57:39.244114   53870 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:57:39.244145   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetState
	I0717 22:57:39.244609   53870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:57:39.244636   53870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:57:39.264113   53870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38423
	I0717 22:57:39.264664   53870 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:57:39.265196   53870 main.go:141] libmachine: Using API Version  1
	I0717 22:57:39.265217   53870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:57:39.265738   53870 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:57:39.265990   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetState
	I0717 22:57:39.267754   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:57:39.269600   53870 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:57:39.269649   53870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37175
	I0717 22:57:39.271155   53870 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:57:39.271170   53870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 22:57:39.271196   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:57:39.271008   53870 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-332820"
	W0717 22:57:39.271246   53870 addons.go:240] addon default-storageclass should already be in state true
	I0717 22:57:39.271278   53870 host.go:66] Checking if "old-k8s-version-332820" exists ...
	I0717 22:57:39.271539   53870 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:57:39.271564   53870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:57:39.271582   53870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:57:39.272088   53870 main.go:141] libmachine: Using API Version  1
	I0717 22:57:39.272112   53870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:57:39.272450   53870 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:57:39.272628   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetState
	I0717 22:57:39.275001   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:57:39.276178   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:57:39.276580   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:57:39.276603   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:57:39.276866   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:57:39.277046   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:57:39.277173   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:57:39.277284   53870 sshutil.go:53] new ssh client: &{IP:192.168.50.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/old-k8s-version-332820/id_rsa Username:docker}
	I0717 22:57:39.279594   53870 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 22:57:39.281161   53870 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 22:57:39.281178   53870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 22:57:39.281197   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:57:39.284664   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:57:39.285093   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:57:39.285126   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:57:39.285323   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:57:39.285486   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:57:39.285624   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:57:39.285731   53870 sshutil.go:53] new ssh client: &{IP:192.168.50.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/old-k8s-version-332820/id_rsa Username:docker}
	I0717 22:57:39.291470   53870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44127
	I0717 22:57:39.291955   53870 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:57:39.292486   53870 main.go:141] libmachine: Using API Version  1
	I0717 22:57:39.292509   53870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:57:39.292896   53870 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:57:39.293409   53870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:57:39.293446   53870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:57:39.310134   53870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35331
	I0717 22:57:39.310626   53870 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:57:39.311202   53870 main.go:141] libmachine: Using API Version  1
	I0717 22:57:39.311227   53870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:57:39.311758   53870 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:57:39.311947   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetState
	I0717 22:57:39.314218   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:57:39.314495   53870 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 22:57:39.314506   53870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 22:57:39.314520   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:57:39.317813   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:57:39.321612   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:57:39.321659   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:57:39.321685   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:57:39.321771   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:57:39.321872   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:57:39.321963   53870 sshutil.go:53] new ssh client: &{IP:192.168.50.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/old-k8s-version-332820/id_rsa Username:docker}
	I0717 22:57:39.410805   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 22:57:39.448115   53870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:57:39.468015   53870 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 22:57:39.468044   53870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 22:57:39.510209   53870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 22:57:39.542977   53870 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 22:57:39.543006   53870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 22:57:39.621799   53870 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 22:57:39.621830   53870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 22:57:39.695813   53870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 22:57:39.820255   53870 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-332820" context rescaled to 1 replicas
	I0717 22:57:39.820293   53870 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.149 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 22:57:39.822441   53870 out.go:177] * Verifying Kubernetes components...
	I0717 22:57:39.824136   53870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:57:40.366843   53870 start.go:901] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
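Aside (illustrative): the line above confirms the host.minikube.internal record was injected by the coredns ConfigMap rewrite issued at 22:57:39.410805. The same ConfigMap can be inspected afterwards with the kubectl call the test itself uses, e.g.:

    kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'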
	I0717 22:57:40.692359   53870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.244194312s)
	I0717 22:57:40.692412   53870 main.go:141] libmachine: Making call to close driver server
	I0717 22:57:40.692417   53870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.18217225s)
	I0717 22:57:40.692451   53870 main.go:141] libmachine: Making call to close driver server
	I0717 22:57:40.692463   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .Close
	I0717 22:57:40.692427   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .Close
	I0717 22:57:40.692926   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | Closing plugin on server side
	I0717 22:57:40.692941   53870 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:57:40.692955   53870 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:57:40.692961   53870 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:57:40.692966   53870 main.go:141] libmachine: Making call to close driver server
	I0717 22:57:40.692971   53870 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:57:40.692977   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .Close
	I0717 22:57:40.692982   53870 main.go:141] libmachine: Making call to close driver server
	I0717 22:57:40.692993   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .Close
	I0717 22:57:40.693346   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | Closing plugin on server side
	I0717 22:57:40.693347   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | Closing plugin on server side
	I0717 22:57:40.693360   53870 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:57:40.693377   53870 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:57:40.693379   53870 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:57:40.693390   53870 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:57:40.693391   53870 main.go:141] libmachine: Making call to close driver server
	I0717 22:57:40.693402   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .Close
	I0717 22:57:40.693727   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | Closing plugin on server side
	I0717 22:57:40.695361   53870 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:57:40.695382   53870 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:57:41.360399   53870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.664534201s)
	I0717 22:57:41.360444   53870 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.536280934s)
	I0717 22:57:41.360477   53870 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-332820" to be "Ready" ...
	I0717 22:57:41.360484   53870 main.go:141] libmachine: Making call to close driver server
	I0717 22:57:41.360603   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .Close
	I0717 22:57:41.360912   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | Closing plugin on server side
	I0717 22:57:41.360959   53870 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:57:41.360976   53870 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:57:41.360986   53870 main.go:141] libmachine: Making call to close driver server
	I0717 22:57:41.361000   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .Close
	I0717 22:57:41.361267   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | Closing plugin on server side
	I0717 22:57:41.361323   53870 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:57:41.361335   53870 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:57:41.361350   53870 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-332820"
	I0717 22:57:41.364209   53870 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 22:57:39.128970   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:41.129335   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:41.365698   53870 addons.go:502] enable addons completed in 2.143322329s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 22:57:41.370307   53870 node_ready.go:49] node "old-k8s-version-332820" has status "Ready":"True"
	I0717 22:57:41.370334   53870 node_ready.go:38] duration metric: took 9.838563ms waiting for node "old-k8s-version-332820" to be "Ready" ...
	I0717 22:57:41.370345   53870 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:57:41.477919   53870 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-pjn9n" in "kube-system" namespace to be "Ready" ...
	I0717 22:57:38.229186   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:40.229347   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:42.730552   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:43.627986   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:46.126930   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:43.515865   53870 pod_ready.go:102] pod "coredns-5644d7b6d9-pjn9n" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:44.011451   53870 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-pjn9n" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-pjn9n" not found
	I0717 22:57:44.011475   53870 pod_ready.go:81] duration metric: took 2.533523466s waiting for pod "coredns-5644d7b6d9-pjn9n" in "kube-system" namespace to be "Ready" ...
	E0717 22:57:44.011483   53870 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-pjn9n" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-pjn9n" not found
	I0717 22:57:44.011490   53870 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-t4d2t" in "kube-system" namespace to be "Ready" ...
	I0717 22:57:46.023775   53870 pod_ready.go:102] pod "coredns-5644d7b6d9-t4d2t" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:45.229105   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:47.727715   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:48.128141   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:50.628216   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:48.523241   53870 pod_ready.go:102] pod "coredns-5644d7b6d9-t4d2t" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:50.024098   53870 pod_ready.go:92] pod "coredns-5644d7b6d9-t4d2t" in "kube-system" namespace has status "Ready":"True"
	I0717 22:57:50.024118   53870 pod_ready.go:81] duration metric: took 6.012622912s waiting for pod "coredns-5644d7b6d9-t4d2t" in "kube-system" namespace to be "Ready" ...
	I0717 22:57:50.024129   53870 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dpnlw" in "kube-system" namespace to be "Ready" ...
	I0717 22:57:50.029960   53870 pod_ready.go:92] pod "kube-proxy-dpnlw" in "kube-system" namespace has status "Ready":"True"
	I0717 22:57:50.029976   53870 pod_ready.go:81] duration metric: took 5.842404ms waiting for pod "kube-proxy-dpnlw" in "kube-system" namespace to be "Ready" ...
	I0717 22:57:50.029985   53870 pod_ready.go:38] duration metric: took 8.659630061s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:57:50.029998   53870 api_server.go:52] waiting for apiserver process to appear ...
	I0717 22:57:50.030036   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:57:50.046609   53870 api_server.go:72] duration metric: took 10.226287152s to wait for apiserver process to appear ...
	I0717 22:57:50.046634   53870 api_server.go:88] waiting for apiserver healthz status ...
	I0717 22:57:50.046654   53870 api_server.go:253] Checking apiserver healthz at https://192.168.50.149:8443/healthz ...
	I0717 22:57:50.053143   53870 api_server.go:279] https://192.168.50.149:8443/healthz returned 200:
	ok
	I0717 22:57:50.054242   53870 api_server.go:141] control plane version: v1.16.0
	I0717 22:57:50.054259   53870 api_server.go:131] duration metric: took 7.618888ms to wait for apiserver health ...
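Aside (illustrative): the healthz probe above hits https://192.168.50.149:8443/healthz with the cluster's client credentials. An equivalent manual check through the kubeconfig, without hand-crafting TLS flags, is kubectl's raw API access:

    kubectl get --raw /healthz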
	I0717 22:57:50.054265   53870 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 22:57:50.059517   53870 system_pods.go:59] 4 kube-system pods found
	I0717 22:57:50.059537   53870 system_pods.go:61] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:50.059542   53870 system_pods.go:61] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:50.059550   53870 system_pods.go:61] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:50.059559   53870 system_pods.go:61] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:50.059567   53870 system_pods.go:74] duration metric: took 5.296559ms to wait for pod list to return data ...
	I0717 22:57:50.059575   53870 default_sa.go:34] waiting for default service account to be created ...
	I0717 22:57:50.062619   53870 default_sa.go:45] found service account: "default"
	I0717 22:57:50.062636   53870 default_sa.go:55] duration metric: took 3.055001ms for default service account to be created ...
	I0717 22:57:50.062643   53870 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 22:57:50.066927   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:50.066960   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:50.066969   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:50.066978   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:50.066987   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:50.067003   53870 retry.go:31] will retry after 260.087226ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
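Aside (illustrative): these "missing components" retries are waiting for the control-plane mirror pods (etcd, kube-apiserver, kube-controller-manager, kube-scheduler) to be registered by the kubelet from the static-pod manifests written earlier. Assuming the kubeadm-style tier=control-plane label is present on those manifests, they can be listed with:

    kubectl -n kube-system get pods -l tier=control-plane

(a plain "kubectl -n kube-system get pods" shows the same set if that label assumption does not hold).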
	I0717 22:57:50.331854   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:50.331881   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:50.331886   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:50.331893   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:50.331899   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:50.331914   53870 retry.go:31] will retry after 352.733578ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:50.689437   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:50.689470   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:50.689478   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:50.689489   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:50.689497   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:50.689531   53870 retry.go:31] will retry after 448.974584ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:51.144027   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:51.144052   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:51.144057   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:51.144064   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:51.144072   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:51.144084   53870 retry.go:31] will retry after 388.759143ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:51.538649   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:51.538681   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:51.538690   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:51.538701   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:51.538709   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:51.538726   53870 retry.go:31] will retry after 516.772578ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:52.061223   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:52.061251   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:52.061257   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:52.061264   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:52.061270   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:52.061284   53870 retry.go:31] will retry after 640.645684ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:52.706812   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:52.706841   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:52.706848   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:52.706857   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:52.706865   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:52.706881   53870 retry.go:31] will retry after 800.353439ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:49.728135   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:51.729859   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:53.128948   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:55.628153   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:53.512660   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:53.512702   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:53.512710   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:53.512720   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:53.512729   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:53.512746   53870 retry.go:31] will retry after 1.135974065s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:54.653983   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:54.654008   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:54.654013   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:54.654021   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:54.654027   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:54.654040   53870 retry.go:31] will retry after 1.807970353s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:56.466658   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:56.466685   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:56.466690   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:56.466697   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:56.466703   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:56.466717   53870 retry.go:31] will retry after 1.738235237s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:53.729966   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:56.229195   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:58.130852   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:00.627290   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:58.210259   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:58.210286   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:58.210291   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:58.210298   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:58.210304   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:58.210318   53870 retry.go:31] will retry after 2.588058955s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:58:00.805164   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:58:00.805189   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:58:00.805195   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:58:00.805204   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:58:00.805212   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:58:00.805229   53870 retry.go:31] will retry after 2.395095199s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:58.230452   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:00.730302   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:02.627408   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:05.127023   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:03.205614   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:58:03.205641   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:58:03.205646   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:58:03.205654   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:58:03.205661   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:58:03.205673   53870 retry.go:31] will retry after 3.552797061s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:58:06.765112   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:58:06.765169   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:58:06.765189   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:58:06.765202   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:58:06.765211   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:58:06.765229   53870 retry.go:31] will retry after 3.62510644s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:58:03.229254   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:05.729500   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:07.627727   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:10.127545   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:10.396156   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:58:10.396185   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:58:10.396193   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:58:10.396202   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:58:10.396210   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:58:10.396234   53870 retry.go:31] will retry after 7.05504218s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:58:08.230115   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:10.729252   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:12.729814   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:12.627688   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:14.629102   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:17.126975   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:17.458031   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:58:17.458055   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:58:17.458060   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:58:17.458067   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:58:17.458072   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:58:17.458085   53870 retry.go:31] will retry after 7.079137896s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:58:15.228577   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:17.229657   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:19.127827   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:21.627879   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:19.733111   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:22.229170   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:24.128551   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:26.627380   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:24.542750   53870 system_pods.go:86] 5 kube-system pods found
	I0717 22:58:24.542779   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:58:24.542785   53870 system_pods.go:89] "kube-controller-manager-old-k8s-version-332820" [6178a2db-2800-4689-bb29-5fd220cf3560] Running
	I0717 22:58:24.542789   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:58:24.542796   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:58:24.542801   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:58:24.542814   53870 retry.go:31] will retry after 10.245831604s: missing components: etcd, kube-apiserver, kube-scheduler
	I0717 22:58:24.729548   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:27.228785   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:28.627425   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:30.627791   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:29.728922   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:31.729450   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:32.628481   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:35.127509   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:37.128620   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:34.794623   53870 system_pods.go:86] 6 kube-system pods found
	I0717 22:58:34.794652   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:58:34.794658   53870 system_pods.go:89] "kube-apiserver-old-k8s-version-332820" [a92ec810-2496-4702-96cb-b99972aa0907] Running
	I0717 22:58:34.794662   53870 system_pods.go:89] "kube-controller-manager-old-k8s-version-332820" [6178a2db-2800-4689-bb29-5fd220cf3560] Running
	I0717 22:58:34.794666   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:58:34.794673   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:58:34.794678   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:58:34.794692   53870 retry.go:31] will retry after 13.54688256s: missing components: etcd, kube-scheduler
	I0717 22:58:33.732071   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:36.230099   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:39.627130   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:41.628484   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:38.230167   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:40.728553   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:42.730438   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:44.129730   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:46.130222   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:45.228042   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:47.230684   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:48.627207   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:51.127809   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:48.348380   53870 system_pods.go:86] 8 kube-system pods found
	I0717 22:58:48.348409   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:58:48.348415   53870 system_pods.go:89] "etcd-old-k8s-version-332820" [2182326c-a489-44f6-a2bb-4d238d500cd4] Pending
	I0717 22:58:48.348419   53870 system_pods.go:89] "kube-apiserver-old-k8s-version-332820" [a92ec810-2496-4702-96cb-b99972aa0907] Running
	I0717 22:58:48.348424   53870 system_pods.go:89] "kube-controller-manager-old-k8s-version-332820" [6178a2db-2800-4689-bb29-5fd220cf3560] Running
	I0717 22:58:48.348429   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:58:48.348433   53870 system_pods.go:89] "kube-scheduler-old-k8s-version-332820" [6145ebf3-1505-4eee-be83-b473b2d6eb16] Running
	I0717 22:58:48.348440   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:58:48.348448   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:58:48.348460   53870 retry.go:31] will retry after 11.748298579s: missing components: etcd
	I0717 22:58:49.730893   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:51.731624   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:53.131814   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:55.628315   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:54.229398   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:56.232954   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:00.104576   53870 system_pods.go:86] 8 kube-system pods found
	I0717 22:59:00.104603   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:59:00.104609   53870 system_pods.go:89] "etcd-old-k8s-version-332820" [2182326c-a489-44f6-a2bb-4d238d500cd4] Running
	I0717 22:59:00.104613   53870 system_pods.go:89] "kube-apiserver-old-k8s-version-332820" [a92ec810-2496-4702-96cb-b99972aa0907] Running
	I0717 22:59:00.104618   53870 system_pods.go:89] "kube-controller-manager-old-k8s-version-332820" [6178a2db-2800-4689-bb29-5fd220cf3560] Running
	I0717 22:59:00.104622   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:59:00.104626   53870 system_pods.go:89] "kube-scheduler-old-k8s-version-332820" [6145ebf3-1505-4eee-be83-b473b2d6eb16] Running
	I0717 22:59:00.104632   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:59:00.104638   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:59:00.104646   53870 system_pods.go:126] duration metric: took 1m10.041998574s to wait for k8s-apps to be running ...
	I0717 22:59:00.104654   53870 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 22:59:00.104712   53870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:59:00.127311   53870 system_svc.go:56] duration metric: took 22.647393ms WaitForService to wait for kubelet.
	I0717 22:59:00.127340   53870 kubeadm.go:581] duration metric: took 1m20.307024254s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 22:59:00.127365   53870 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:59:00.131417   53870 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:59:00.131440   53870 node_conditions.go:123] node cpu capacity is 2
	I0717 22:59:00.131451   53870 node_conditions.go:105] duration metric: took 4.081643ms to run NodePressure ...
	I0717 22:59:00.131462   53870 start.go:228] waiting for startup goroutines ...
	I0717 22:59:00.131468   53870 start.go:233] waiting for cluster config update ...
	I0717 22:59:00.131478   53870 start.go:242] writing updated cluster config ...
	I0717 22:59:00.131776   53870 ssh_runner.go:195] Run: rm -f paused
	I0717 22:59:00.183048   53870 start.go:578] kubectl: 1.27.3, cluster: 1.16.0 (minor skew: 11)
	I0717 22:59:00.184945   53870 out.go:177] 
	W0717 22:59:00.186221   53870 out.go:239] ! /usr/local/bin/kubectl is version 1.27.3, which may have incompatibilities with Kubernetes 1.16.0.
	I0717 22:59:00.187477   53870 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0717 22:59:00.188679   53870 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-332820" cluster and "default" namespace by default
	I0717 22:58:57.628894   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:59.629684   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:02.128694   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:58.730891   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:00.731091   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:04.627812   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:06.628434   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:03.230847   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:05.728807   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:07.728897   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:08.630065   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:11.128988   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:09.729866   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:12.229160   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:13.627995   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:16.128000   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:14.728745   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:16.733743   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:18.131709   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:20.628704   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:19.234979   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:21.730483   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:22.629821   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:25.127417   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:27.127827   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:24.229123   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:26.728729   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:29.629594   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:32.126711   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:28.729318   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:30.729924   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:32.731713   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:34.627629   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:37.128939   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:35.228008   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:37.233675   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:39.628990   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:41.629614   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:39.729052   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:41.730060   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:44.127514   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:46.128048   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:44.228115   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:46.229857   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:48.128761   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:50.631119   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:48.728917   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:50.730222   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:52.731295   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:53.127276   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:55.127950   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:57.128481   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:55.228655   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:57.228813   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:59.626761   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:01.628045   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:59.229493   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:01.230143   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:04.127371   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:06.128098   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:03.728770   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:06.228708   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:08.128197   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:10.626883   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:08.229060   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:10.727573   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:12.730410   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:12.628273   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:14.629361   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:17.127148   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:13.822400   54248 pod_ready.go:81] duration metric: took 4m0.000761499s waiting for pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace to be "Ready" ...
	E0717 23:00:13.822430   54248 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 23:00:13.822438   54248 pod_ready.go:38] duration metric: took 4m2.778910042s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 23:00:13.822455   54248 api_server.go:52] waiting for apiserver process to appear ...
	I0717 23:00:13.822482   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 23:00:13.822546   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 23:00:13.868846   54248 cri.go:89] found id: "50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c"
	I0717 23:00:13.868873   54248 cri.go:89] found id: ""
	I0717 23:00:13.868884   54248 logs.go:284] 1 containers: [50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c]
	I0717 23:00:13.868951   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:13.873997   54248 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 23:00:13.874077   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 23:00:13.904386   54248 cri.go:89] found id: "e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2"
	I0717 23:00:13.904415   54248 cri.go:89] found id: ""
	I0717 23:00:13.904425   54248 logs.go:284] 1 containers: [e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2]
	I0717 23:00:13.904486   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:13.909075   54248 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 23:00:13.909127   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 23:00:13.940628   54248 cri.go:89] found id: "828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594"
	I0717 23:00:13.940657   54248 cri.go:89] found id: ""
	I0717 23:00:13.940667   54248 logs.go:284] 1 containers: [828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594]
	I0717 23:00:13.940721   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:13.945076   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 23:00:13.945132   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 23:00:13.976589   54248 cri.go:89] found id: "a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e"
	I0717 23:00:13.976612   54248 cri.go:89] found id: ""
	I0717 23:00:13.976621   54248 logs.go:284] 1 containers: [a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e]
	I0717 23:00:13.976684   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:13.981163   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 23:00:13.981231   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 23:00:14.018277   54248 cri.go:89] found id: "5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea"
	I0717 23:00:14.018298   54248 cri.go:89] found id: ""
	I0717 23:00:14.018308   54248 logs.go:284] 1 containers: [5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea]
	I0717 23:00:14.018370   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:14.022494   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 23:00:14.022557   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 23:00:14.055302   54248 cri.go:89] found id: "0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135"
	I0717 23:00:14.055327   54248 cri.go:89] found id: ""
	I0717 23:00:14.055336   54248 logs.go:284] 1 containers: [0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135]
	I0717 23:00:14.055388   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:14.059980   54248 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 23:00:14.060041   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 23:00:14.092467   54248 cri.go:89] found id: ""
	I0717 23:00:14.092495   54248 logs.go:284] 0 containers: []
	W0717 23:00:14.092505   54248 logs.go:286] No container was found matching "kindnet"
	I0717 23:00:14.092512   54248 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 23:00:14.092570   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 23:00:14.127348   54248 cri.go:89] found id: "9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87"
	I0717 23:00:14.127370   54248 cri.go:89] found id: ""
	I0717 23:00:14.127383   54248 logs.go:284] 1 containers: [9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87]
	I0717 23:00:14.127438   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:14.132646   54248 logs.go:123] Gathering logs for dmesg ...
	I0717 23:00:14.132673   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 23:00:14.147882   54248 logs.go:123] Gathering logs for kube-apiserver [50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c] ...
	I0717 23:00:14.147911   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c"
	I0717 23:00:14.198417   54248 logs.go:123] Gathering logs for etcd [e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2] ...
	I0717 23:00:14.198466   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2"
	I0717 23:00:14.244734   54248 logs.go:123] Gathering logs for kube-scheduler [a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e] ...
	I0717 23:00:14.244775   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e"
	I0717 23:00:14.287920   54248 logs.go:123] Gathering logs for storage-provisioner [9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87] ...
	I0717 23:00:14.287956   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87"
	I0717 23:00:14.333785   54248 logs.go:123] Gathering logs for container status ...
	I0717 23:00:14.333820   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 23:00:14.378892   54248 logs.go:123] Gathering logs for kubelet ...
	I0717 23:00:14.378930   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 23:00:14.482292   54248 logs.go:123] Gathering logs for coredns [828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594] ...
	I0717 23:00:14.482332   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594"
	I0717 23:00:14.525418   54248 logs.go:123] Gathering logs for kube-proxy [5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea] ...
	I0717 23:00:14.525445   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea"
	I0717 23:00:14.562013   54248 logs.go:123] Gathering logs for kube-controller-manager [0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135] ...
	I0717 23:00:14.562050   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135"
	I0717 23:00:14.609917   54248 logs.go:123] Gathering logs for CRI-O ...
	I0717 23:00:14.609955   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 23:00:15.088465   54248 logs.go:123] Gathering logs for describe nodes ...
	I0717 23:00:15.088502   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 23:00:17.743963   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 23:00:17.761437   54248 api_server.go:72] duration metric: took 4m9.176341685s to wait for apiserver process to appear ...
	I0717 23:00:17.761464   54248 api_server.go:88] waiting for apiserver healthz status ...
	I0717 23:00:17.761499   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 23:00:17.761569   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 23:00:17.796097   54248 cri.go:89] found id: "50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c"
	I0717 23:00:17.796126   54248 cri.go:89] found id: ""
	I0717 23:00:17.796136   54248 logs.go:284] 1 containers: [50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c]
	I0717 23:00:17.796194   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:17.800256   54248 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 23:00:17.800318   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 23:00:17.830519   54248 cri.go:89] found id: "e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2"
	I0717 23:00:17.830540   54248 cri.go:89] found id: ""
	I0717 23:00:17.830549   54248 logs.go:284] 1 containers: [e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2]
	I0717 23:00:17.830597   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:17.835086   54248 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 23:00:17.835158   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 23:00:17.869787   54248 cri.go:89] found id: "828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594"
	I0717 23:00:17.869810   54248 cri.go:89] found id: ""
	I0717 23:00:17.869817   54248 logs.go:284] 1 containers: [828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594]
	I0717 23:00:17.869865   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:17.874977   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 23:00:17.875042   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 23:00:17.906026   54248 cri.go:89] found id: "a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e"
	I0717 23:00:17.906060   54248 cri.go:89] found id: ""
	I0717 23:00:17.906070   54248 logs.go:284] 1 containers: [a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e]
	I0717 23:00:17.906130   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:17.912549   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 23:00:17.912619   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 23:00:17.945804   54248 cri.go:89] found id: "5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea"
	I0717 23:00:17.945832   54248 cri.go:89] found id: ""
	I0717 23:00:17.945842   54248 logs.go:284] 1 containers: [5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea]
	I0717 23:00:17.945892   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:17.950115   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 23:00:17.950170   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 23:00:17.980790   54248 cri.go:89] found id: "0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135"
	I0717 23:00:17.980816   54248 cri.go:89] found id: ""
	I0717 23:00:17.980825   54248 logs.go:284] 1 containers: [0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135]
	I0717 23:00:17.980893   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:19.127901   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:21.628419   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:17.985352   54248 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 23:00:17.987262   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 23:00:18.019763   54248 cri.go:89] found id: ""
	I0717 23:00:18.019794   54248 logs.go:284] 0 containers: []
	W0717 23:00:18.019804   54248 logs.go:286] No container was found matching "kindnet"
	I0717 23:00:18.019812   54248 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 23:00:18.019875   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 23:00:18.052106   54248 cri.go:89] found id: "9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87"
	I0717 23:00:18.052135   54248 cri.go:89] found id: ""
	I0717 23:00:18.052144   54248 logs.go:284] 1 containers: [9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87]
	I0717 23:00:18.052192   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:18.057066   54248 logs.go:123] Gathering logs for kube-scheduler [a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e] ...
	I0717 23:00:18.057093   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e"
	I0717 23:00:18.100637   54248 logs.go:123] Gathering logs for kube-proxy [5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea] ...
	I0717 23:00:18.100672   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea"
	I0717 23:00:18.137149   54248 logs.go:123] Gathering logs for kube-controller-manager [0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135] ...
	I0717 23:00:18.137176   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135"
	I0717 23:00:18.191633   54248 logs.go:123] Gathering logs for storage-provisioner [9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87] ...
	I0717 23:00:18.191679   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87"
	I0717 23:00:18.231765   54248 logs.go:123] Gathering logs for dmesg ...
	I0717 23:00:18.231798   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 23:00:18.250030   54248 logs.go:123] Gathering logs for kube-apiserver [50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c] ...
	I0717 23:00:18.250061   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c"
	I0717 23:00:18.312833   54248 logs.go:123] Gathering logs for etcd [e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2] ...
	I0717 23:00:18.312881   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2"
	I0717 23:00:18.357152   54248 logs.go:123] Gathering logs for coredns [828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594] ...
	I0717 23:00:18.357190   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594"
	I0717 23:00:18.388834   54248 logs.go:123] Gathering logs for kubelet ...
	I0717 23:00:18.388871   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 23:00:18.491866   54248 logs.go:123] Gathering logs for describe nodes ...
	I0717 23:00:18.491898   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 23:00:18.638732   54248 logs.go:123] Gathering logs for CRI-O ...
	I0717 23:00:18.638761   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 23:00:19.135753   54248 logs.go:123] Gathering logs for container status ...
	I0717 23:00:19.135788   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 23:00:21.678446   54248 api_server.go:253] Checking apiserver healthz at https://192.168.61.179:8443/healthz ...
	I0717 23:00:21.684484   54248 api_server.go:279] https://192.168.61.179:8443/healthz returned 200:
	ok
	I0717 23:00:21.686359   54248 api_server.go:141] control plane version: v1.27.3
	I0717 23:00:21.686385   54248 api_server.go:131] duration metric: took 3.924913504s to wait for apiserver health ...
	I0717 23:00:21.686395   54248 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 23:00:21.686420   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 23:00:21.686476   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 23:00:21.720978   54248 cri.go:89] found id: "50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c"
	I0717 23:00:21.721002   54248 cri.go:89] found id: ""
	I0717 23:00:21.721012   54248 logs.go:284] 1 containers: [50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c]
	I0717 23:00:21.721070   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:21.726790   54248 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 23:00:21.726860   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 23:00:21.756975   54248 cri.go:89] found id: "e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2"
	I0717 23:00:21.757001   54248 cri.go:89] found id: ""
	I0717 23:00:21.757011   54248 logs.go:284] 1 containers: [e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2]
	I0717 23:00:21.757078   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:21.761611   54248 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 23:00:21.761681   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 23:00:21.795689   54248 cri.go:89] found id: "828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594"
	I0717 23:00:21.795709   54248 cri.go:89] found id: ""
	I0717 23:00:21.795716   54248 logs.go:284] 1 containers: [828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594]
	I0717 23:00:21.795767   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:21.800172   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 23:00:21.800236   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 23:00:21.833931   54248 cri.go:89] found id: "a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e"
	I0717 23:00:21.833957   54248 cri.go:89] found id: ""
	I0717 23:00:21.833968   54248 logs.go:284] 1 containers: [a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e]
	I0717 23:00:21.834026   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:21.839931   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 23:00:21.840003   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 23:00:21.874398   54248 cri.go:89] found id: "5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea"
	I0717 23:00:21.874423   54248 cri.go:89] found id: ""
	I0717 23:00:21.874432   54248 logs.go:284] 1 containers: [5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea]
	I0717 23:00:21.874489   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:21.878922   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 23:00:21.878986   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 23:00:21.913781   54248 cri.go:89] found id: "0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135"
	I0717 23:00:21.913812   54248 cri.go:89] found id: ""
	I0717 23:00:21.913821   54248 logs.go:284] 1 containers: [0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135]
	I0717 23:00:21.913877   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:21.918217   54248 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 23:00:21.918284   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 23:00:21.951832   54248 cri.go:89] found id: ""
	I0717 23:00:21.951859   54248 logs.go:284] 0 containers: []
	W0717 23:00:21.951869   54248 logs.go:286] No container was found matching "kindnet"
	I0717 23:00:21.951876   54248 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 23:00:21.951925   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 23:00:21.987514   54248 cri.go:89] found id: "9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87"
	I0717 23:00:21.987543   54248 cri.go:89] found id: ""
	I0717 23:00:21.987553   54248 logs.go:284] 1 containers: [9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87]
	I0717 23:00:21.987617   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:21.992144   54248 logs.go:123] Gathering logs for container status ...
	I0717 23:00:21.992164   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 23:00:22.031685   54248 logs.go:123] Gathering logs for dmesg ...
	I0717 23:00:22.031715   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 23:00:22.046652   54248 logs.go:123] Gathering logs for describe nodes ...
	I0717 23:00:22.046691   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 23:00:22.191164   54248 logs.go:123] Gathering logs for kube-scheduler [a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e] ...
	I0717 23:00:22.191191   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e"
	I0717 23:00:22.233174   54248 logs.go:123] Gathering logs for kube-proxy [5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea] ...
	I0717 23:00:22.233209   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea"
	I0717 23:00:22.279246   54248 logs.go:123] Gathering logs for kube-controller-manager [0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135] ...
	I0717 23:00:22.279273   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135"
	I0717 23:00:22.330534   54248 logs.go:123] Gathering logs for CRI-O ...
	I0717 23:00:22.330565   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 23:00:22.837335   54248 logs.go:123] Gathering logs for kubelet ...
	I0717 23:00:22.837382   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 23:00:22.947015   54248 logs.go:123] Gathering logs for kube-apiserver [50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c] ...
	I0717 23:00:22.947073   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c"
	I0717 23:00:22.991731   54248 logs.go:123] Gathering logs for etcd [e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2] ...
	I0717 23:00:22.991768   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2"
	I0717 23:00:23.036115   54248 logs.go:123] Gathering logs for coredns [828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594] ...
	I0717 23:00:23.036146   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594"
	I0717 23:00:23.071825   54248 logs.go:123] Gathering logs for storage-provisioner [9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87] ...
	I0717 23:00:23.071860   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87"
	I0717 23:00:25.629247   54248 system_pods.go:59] 8 kube-system pods found
	I0717 23:00:25.629277   54248 system_pods.go:61] "coredns-5d78c9869d-6ljtn" [9488690c-8407-42ce-9938-039af0fa2c4d] Running
	I0717 23:00:25.629284   54248 system_pods.go:61] "etcd-embed-certs-571296" [e6e8b5d1-b1e7-4c3d-89d7-f44a2a6aff8b] Running
	I0717 23:00:25.629291   54248 system_pods.go:61] "kube-apiserver-embed-certs-571296" [3b5f5396-d325-445c-b3af-4cc7a506143e] Running
	I0717 23:00:25.629298   54248 system_pods.go:61] "kube-controller-manager-embed-certs-571296" [e113ffeb-97bd-4b0d-a432-b58be43b295b] Running
	I0717 23:00:25.629305   54248 system_pods.go:61] "kube-proxy-xjpds" [7c074cca-2579-4a54-bf55-77bba0fbcd34] Running
	I0717 23:00:25.629311   54248 system_pods.go:61] "kube-scheduler-embed-certs-571296" [1d192365-8c7b-4367-b4b0-fe9f6f5874af] Running
	I0717 23:00:25.629320   54248 system_pods.go:61] "metrics-server-74d5c6b9c-cknmm" [d1fb930f-518d-4ff4-94fe-7743ab55ecc6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 23:00:25.629331   54248 system_pods.go:61] "storage-provisioner" [1138e736-ef8d-4d24-86d5-cac3f58f0dd6] Running
	I0717 23:00:25.629339   54248 system_pods.go:74] duration metric: took 3.942938415s to wait for pod list to return data ...
	I0717 23:00:25.629347   54248 default_sa.go:34] waiting for default service account to be created ...
	I0717 23:00:25.632079   54248 default_sa.go:45] found service account: "default"
	I0717 23:00:25.632105   54248 default_sa.go:55] duration metric: took 2.751332ms for default service account to be created ...
	I0717 23:00:25.632114   54248 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 23:00:25.639267   54248 system_pods.go:86] 8 kube-system pods found
	I0717 23:00:25.639297   54248 system_pods.go:89] "coredns-5d78c9869d-6ljtn" [9488690c-8407-42ce-9938-039af0fa2c4d] Running
	I0717 23:00:25.639305   54248 system_pods.go:89] "etcd-embed-certs-571296" [e6e8b5d1-b1e7-4c3d-89d7-f44a2a6aff8b] Running
	I0717 23:00:25.639312   54248 system_pods.go:89] "kube-apiserver-embed-certs-571296" [3b5f5396-d325-445c-b3af-4cc7a506143e] Running
	I0717 23:00:25.639321   54248 system_pods.go:89] "kube-controller-manager-embed-certs-571296" [e113ffeb-97bd-4b0d-a432-b58be43b295b] Running
	I0717 23:00:25.639328   54248 system_pods.go:89] "kube-proxy-xjpds" [7c074cca-2579-4a54-bf55-77bba0fbcd34] Running
	I0717 23:00:25.639335   54248 system_pods.go:89] "kube-scheduler-embed-certs-571296" [1d192365-8c7b-4367-b4b0-fe9f6f5874af] Running
	I0717 23:00:25.639345   54248 system_pods.go:89] "metrics-server-74d5c6b9c-cknmm" [d1fb930f-518d-4ff4-94fe-7743ab55ecc6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 23:00:25.639353   54248 system_pods.go:89] "storage-provisioner" [1138e736-ef8d-4d24-86d5-cac3f58f0dd6] Running
	I0717 23:00:25.639362   54248 system_pods.go:126] duration metric: took 7.242476ms to wait for k8s-apps to be running ...
	I0717 23:00:25.639374   54248 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 23:00:25.639426   54248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 23:00:25.654026   54248 system_svc.go:56] duration metric: took 14.646361ms WaitForService to wait for kubelet.
	I0717 23:00:25.654049   54248 kubeadm.go:581] duration metric: took 4m17.068957071s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 23:00:25.654069   54248 node_conditions.go:102] verifying NodePressure condition ...
	I0717 23:00:25.658024   54248 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 23:00:25.658049   54248 node_conditions.go:123] node cpu capacity is 2
	I0717 23:00:25.658058   54248 node_conditions.go:105] duration metric: took 3.985859ms to run NodePressure ...
	I0717 23:00:25.658069   54248 start.go:228] waiting for startup goroutines ...
	I0717 23:00:25.658074   54248 start.go:233] waiting for cluster config update ...
	I0717 23:00:25.658083   54248 start.go:242] writing updated cluster config ...
	I0717 23:00:25.658335   54248 ssh_runner.go:195] Run: rm -f paused
	I0717 23:00:25.709576   54248 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 23:00:25.711805   54248 out.go:177] * Done! kubectl is now configured to use "embed-certs-571296" cluster and "default" namespace by default
	I0717 23:00:24.128252   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:26.130357   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:28.627639   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:30.627679   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:33.128946   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:35.627313   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:37.627998   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:40.128503   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:42.629092   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:45.126773   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:47.127774   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:49.128495   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:51.628994   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:54.127925   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:56.128908   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:56.725699   54649 pod_ready.go:81] duration metric: took 4m0.000620769s waiting for pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace to be "Ready" ...
	E0717 23:00:56.725751   54649 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 23:00:56.725769   54649 pod_ready.go:38] duration metric: took 4m2.87768055s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 23:00:56.725797   54649 api_server.go:52] waiting for apiserver process to appear ...
	I0717 23:00:56.725839   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 23:00:56.725908   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 23:00:56.788229   54649 cri.go:89] found id: "45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0"
	I0717 23:00:56.788257   54649 cri.go:89] found id: ""
	I0717 23:00:56.788266   54649 logs.go:284] 1 containers: [45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0]
	I0717 23:00:56.788337   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:00:56.793647   54649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 23:00:56.793709   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 23:00:56.828720   54649 cri.go:89] found id: "7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab"
	I0717 23:00:56.828741   54649 cri.go:89] found id: ""
	I0717 23:00:56.828748   54649 logs.go:284] 1 containers: [7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab]
	I0717 23:00:56.828790   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:00:56.833266   54649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 23:00:56.833339   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 23:00:56.865377   54649 cri.go:89] found id: "30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223"
	I0717 23:00:56.865407   54649 cri.go:89] found id: ""
	I0717 23:00:56.865416   54649 logs.go:284] 1 containers: [30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223]
	I0717 23:00:56.865478   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:00:56.870881   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 23:00:56.870944   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 23:00:56.908871   54649 cri.go:89] found id: "4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad"
	I0717 23:00:56.908891   54649 cri.go:89] found id: ""
	I0717 23:00:56.908899   54649 logs.go:284] 1 containers: [4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad]
	I0717 23:00:56.908952   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:00:56.913121   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 23:00:56.913171   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 23:00:56.946752   54649 cri.go:89] found id: "a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524"
	I0717 23:00:56.946797   54649 cri.go:89] found id: ""
	I0717 23:00:56.946806   54649 logs.go:284] 1 containers: [a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524]
	I0717 23:00:56.946864   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:00:56.951141   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 23:00:56.951216   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 23:00:56.986967   54649 cri.go:89] found id: "7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10"
	I0717 23:00:56.986987   54649 cri.go:89] found id: ""
	I0717 23:00:56.986996   54649 logs.go:284] 1 containers: [7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10]
	I0717 23:00:56.987039   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:00:56.993578   54649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 23:00:56.993655   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 23:00:57.030468   54649 cri.go:89] found id: ""
	I0717 23:00:57.030491   54649 logs.go:284] 0 containers: []
	W0717 23:00:57.030498   54649 logs.go:286] No container was found matching "kindnet"
	I0717 23:00:57.030503   54649 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 23:00:57.030548   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 23:00:57.070533   54649 cri.go:89] found id: "4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6"
	I0717 23:00:57.070564   54649 cri.go:89] found id: ""
	I0717 23:00:57.070574   54649 logs.go:284] 1 containers: [4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6]
	I0717 23:00:57.070632   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:00:57.075379   54649 logs.go:123] Gathering logs for container status ...
	I0717 23:00:57.075685   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 23:00:57.121312   54649 logs.go:123] Gathering logs for kubelet ...
	I0717 23:00:57.121343   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 23:00:57.222647   54649 logs.go:138] Found kubelet problem: Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: W0717 22:56:50.841820    3828 reflector.go:533] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	W0717 23:00:57.222960   54649 logs.go:138] Found kubelet problem: Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: E0717 22:56:50.841863    3828 reflector.go:148] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	I0717 23:00:57.251443   54649 logs.go:123] Gathering logs for dmesg ...
	I0717 23:00:57.251481   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 23:00:57.266213   54649 logs.go:123] Gathering logs for coredns [30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223] ...
	I0717 23:00:57.266242   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223"
	I0717 23:00:57.304032   54649 logs.go:123] Gathering logs for kube-proxy [a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524] ...
	I0717 23:00:57.304058   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524"
	I0717 23:00:57.342839   54649 logs.go:123] Gathering logs for storage-provisioner [4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6] ...
	I0717 23:00:57.342865   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6"
	I0717 23:00:57.378086   54649 logs.go:123] Gathering logs for CRI-O ...
	I0717 23:00:57.378118   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 23:00:57.893299   54649 logs.go:123] Gathering logs for describe nodes ...
	I0717 23:00:57.893338   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 23:00:58.043526   54649 logs.go:123] Gathering logs for kube-apiserver [45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0] ...
	I0717 23:00:58.043564   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0"
	I0717 23:00:58.096422   54649 logs.go:123] Gathering logs for etcd [7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab] ...
	I0717 23:00:58.096452   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab"
	I0717 23:00:58.141423   54649 logs.go:123] Gathering logs for kube-scheduler [4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad] ...
	I0717 23:00:58.141452   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad"
	I0717 23:00:58.183755   54649 logs.go:123] Gathering logs for kube-controller-manager [7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10] ...
	I0717 23:00:58.183792   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10"
	I0717 23:00:58.239385   54649 out.go:309] Setting ErrFile to fd 2...
	I0717 23:00:58.239418   54649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0717 23:00:58.239479   54649 out.go:239] X Problems detected in kubelet:
	W0717 23:00:58.239506   54649 out.go:239]   Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: W0717 22:56:50.841820    3828 reflector.go:533] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	W0717 23:00:58.239522   54649 out.go:239]   Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: E0717 22:56:50.841863    3828 reflector.go:148] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	I0717 23:00:58.239527   54649 out.go:309] Setting ErrFile to fd 2...
	I0717 23:00:58.239533   54649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 23:01:08.241689   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 23:01:08.259063   54649 api_server.go:72] duration metric: took 4m17.020334708s to wait for apiserver process to appear ...
	I0717 23:01:08.259090   54649 api_server.go:88] waiting for apiserver healthz status ...
	I0717 23:01:08.259125   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 23:01:08.259186   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 23:01:08.289063   54649 cri.go:89] found id: "45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0"
	I0717 23:01:08.289080   54649 cri.go:89] found id: ""
	I0717 23:01:08.289088   54649 logs.go:284] 1 containers: [45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0]
	I0717 23:01:08.289146   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:08.293604   54649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 23:01:08.293668   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 23:01:08.323866   54649 cri.go:89] found id: "7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab"
	I0717 23:01:08.323889   54649 cri.go:89] found id: ""
	I0717 23:01:08.323899   54649 logs.go:284] 1 containers: [7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab]
	I0717 23:01:08.324251   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:08.330335   54649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 23:01:08.330405   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 23:01:08.380361   54649 cri.go:89] found id: "30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223"
	I0717 23:01:08.380387   54649 cri.go:89] found id: ""
	I0717 23:01:08.380399   54649 logs.go:284] 1 containers: [30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223]
	I0717 23:01:08.380458   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:08.384547   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 23:01:08.384612   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 23:01:08.416767   54649 cri.go:89] found id: "4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad"
	I0717 23:01:08.416787   54649 cri.go:89] found id: ""
	I0717 23:01:08.416793   54649 logs.go:284] 1 containers: [4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad]
	I0717 23:01:08.416836   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:08.420982   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 23:01:08.421031   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 23:01:08.451034   54649 cri.go:89] found id: "a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524"
	I0717 23:01:08.451064   54649 cri.go:89] found id: ""
	I0717 23:01:08.451074   54649 logs.go:284] 1 containers: [a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524]
	I0717 23:01:08.451126   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:08.455015   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 23:01:08.455063   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 23:01:08.486539   54649 cri.go:89] found id: "7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10"
	I0717 23:01:08.486560   54649 cri.go:89] found id: ""
	I0717 23:01:08.486567   54649 logs.go:284] 1 containers: [7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10]
	I0717 23:01:08.486620   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:08.491106   54649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 23:01:08.491171   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 23:01:08.523068   54649 cri.go:89] found id: ""
	I0717 23:01:08.523099   54649 logs.go:284] 0 containers: []
	W0717 23:01:08.523109   54649 logs.go:286] No container was found matching "kindnet"
	I0717 23:01:08.523116   54649 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 23:01:08.523201   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 23:01:08.556090   54649 cri.go:89] found id: "4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6"
	I0717 23:01:08.556116   54649 cri.go:89] found id: ""
	I0717 23:01:08.556125   54649 logs.go:284] 1 containers: [4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6]
	I0717 23:01:08.556181   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:08.560278   54649 logs.go:123] Gathering logs for storage-provisioner [4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6] ...
	I0717 23:01:08.560301   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6"
	I0717 23:01:08.595021   54649 logs.go:123] Gathering logs for container status ...
	I0717 23:01:08.595052   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 23:01:08.640723   54649 logs.go:123] Gathering logs for dmesg ...
	I0717 23:01:08.640757   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 23:01:08.654641   54649 logs.go:123] Gathering logs for describe nodes ...
	I0717 23:01:08.654679   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 23:01:08.789999   54649 logs.go:123] Gathering logs for kube-apiserver [45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0] ...
	I0717 23:01:08.790026   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0"
	I0717 23:01:08.837387   54649 logs.go:123] Gathering logs for coredns [30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223] ...
	I0717 23:01:08.837420   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223"
	I0717 23:01:08.871514   54649 logs.go:123] Gathering logs for kube-proxy [a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524] ...
	I0717 23:01:08.871565   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524"
	I0717 23:01:08.911626   54649 logs.go:123] Gathering logs for kube-controller-manager [7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10] ...
	I0717 23:01:08.911657   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10"
	I0717 23:01:08.961157   54649 logs.go:123] Gathering logs for kubelet ...
	I0717 23:01:08.961192   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 23:01:09.040804   54649 logs.go:138] Found kubelet problem: Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: W0717 22:56:50.841820    3828 reflector.go:533] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	W0717 23:01:09.040992   54649 logs.go:138] Found kubelet problem: Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: E0717 22:56:50.841863    3828 reflector.go:148] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	I0717 23:01:09.067178   54649 logs.go:123] Gathering logs for etcd [7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab] ...
	I0717 23:01:09.067213   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab"
	I0717 23:01:09.104138   54649 logs.go:123] Gathering logs for kube-scheduler [4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad] ...
	I0717 23:01:09.104170   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad"
	I0717 23:01:09.146623   54649 logs.go:123] Gathering logs for CRI-O ...
	I0717 23:01:09.146653   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 23:01:09.681092   54649 out.go:309] Setting ErrFile to fd 2...
	I0717 23:01:09.681128   54649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0717 23:01:09.681200   54649 out.go:239] X Problems detected in kubelet:
	W0717 23:01:09.681217   54649 out.go:239]   Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: W0717 22:56:50.841820    3828 reflector.go:533] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	W0717 23:01:09.681229   54649 out.go:239]   Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: E0717 22:56:50.841863    3828 reflector.go:148] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	I0717 23:01:09.681237   54649 out.go:309] Setting ErrFile to fd 2...
	I0717 23:01:09.681244   54649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 23:01:19.682682   54649 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8444/healthz ...
	I0717 23:01:19.688102   54649 api_server.go:279] https://192.168.72.118:8444/healthz returned 200:
	ok
	I0717 23:01:19.689304   54649 api_server.go:141] control plane version: v1.27.3
	I0717 23:01:19.689323   54649 api_server.go:131] duration metric: took 11.430226781s to wait for apiserver health ...
	I0717 23:01:19.689330   54649 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 23:01:19.689349   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 23:01:19.689393   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 23:01:19.731728   54649 cri.go:89] found id: "45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0"
	I0717 23:01:19.731748   54649 cri.go:89] found id: ""
	I0717 23:01:19.731756   54649 logs.go:284] 1 containers: [45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0]
	I0717 23:01:19.731807   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:19.737797   54649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 23:01:19.737857   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 23:01:19.776355   54649 cri.go:89] found id: "7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab"
	I0717 23:01:19.776377   54649 cri.go:89] found id: ""
	I0717 23:01:19.776385   54649 logs.go:284] 1 containers: [7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab]
	I0717 23:01:19.776438   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:19.780589   54649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 23:01:19.780645   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 23:01:19.810917   54649 cri.go:89] found id: "30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223"
	I0717 23:01:19.810938   54649 cri.go:89] found id: ""
	I0717 23:01:19.810947   54649 logs.go:284] 1 containers: [30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223]
	I0717 23:01:19.811001   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:19.815185   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 23:01:19.815252   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 23:01:19.852138   54649 cri.go:89] found id: "4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad"
	I0717 23:01:19.852161   54649 cri.go:89] found id: ""
	I0717 23:01:19.852170   54649 logs.go:284] 1 containers: [4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad]
	I0717 23:01:19.852225   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:19.856947   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 23:01:19.857012   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 23:01:19.893668   54649 cri.go:89] found id: "a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524"
	I0717 23:01:19.893695   54649 cri.go:89] found id: ""
	I0717 23:01:19.893705   54649 logs.go:284] 1 containers: [a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524]
	I0717 23:01:19.893763   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:19.897862   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 23:01:19.897915   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 23:01:19.935000   54649 cri.go:89] found id: "7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10"
	I0717 23:01:19.935024   54649 cri.go:89] found id: ""
	I0717 23:01:19.935033   54649 logs.go:284] 1 containers: [7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10]
	I0717 23:01:19.935097   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:19.939417   54649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 23:01:19.939487   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 23:01:19.971266   54649 cri.go:89] found id: ""
	I0717 23:01:19.971296   54649 logs.go:284] 0 containers: []
	W0717 23:01:19.971305   54649 logs.go:286] No container was found matching "kindnet"
	I0717 23:01:19.971313   54649 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 23:01:19.971374   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 23:01:20.007281   54649 cri.go:89] found id: "4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6"
	I0717 23:01:20.007299   54649 cri.go:89] found id: ""
	I0717 23:01:20.007306   54649 logs.go:284] 1 containers: [4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6]
	I0717 23:01:20.007351   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:20.011751   54649 logs.go:123] Gathering logs for describe nodes ...
	I0717 23:01:20.011776   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 23:01:20.146025   54649 logs.go:123] Gathering logs for kube-apiserver [45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0] ...
	I0717 23:01:20.146052   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0"
	I0717 23:01:20.197984   54649 logs.go:123] Gathering logs for etcd [7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab] ...
	I0717 23:01:20.198014   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab"
	I0717 23:01:20.240729   54649 logs.go:123] Gathering logs for coredns [30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223] ...
	I0717 23:01:20.240765   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223"
	I0717 23:01:20.280904   54649 logs.go:123] Gathering logs for kube-scheduler [4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad] ...
	I0717 23:01:20.280931   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad"
	I0717 23:01:20.338648   54649 logs.go:123] Gathering logs for storage-provisioner [4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6] ...
	I0717 23:01:20.338679   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6"
	I0717 23:01:20.378549   54649 logs.go:123] Gathering logs for CRI-O ...
	I0717 23:01:20.378586   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 23:01:20.858716   54649 logs.go:123] Gathering logs for kubelet ...
	I0717 23:01:20.858759   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 23:01:20.944347   54649 logs.go:138] Found kubelet problem: Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: W0717 22:56:50.841820    3828 reflector.go:533] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	W0717 23:01:20.944538   54649 logs.go:138] Found kubelet problem: Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: E0717 22:56:50.841863    3828 reflector.go:148] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	I0717 23:01:20.971487   54649 logs.go:123] Gathering logs for kube-proxy [a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524] ...
	I0717 23:01:20.971520   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524"
	I0717 23:01:21.007705   54649 logs.go:123] Gathering logs for kube-controller-manager [7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10] ...
	I0717 23:01:21.007736   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10"
	I0717 23:01:21.059674   54649 logs.go:123] Gathering logs for container status ...
	I0717 23:01:21.059703   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 23:01:21.095693   54649 logs.go:123] Gathering logs for dmesg ...
	I0717 23:01:21.095722   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 23:01:21.110247   54649 out.go:309] Setting ErrFile to fd 2...
	I0717 23:01:21.110273   54649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0717 23:01:21.110336   54649 out.go:239] X Problems detected in kubelet:
	W0717 23:01:21.110354   54649 out.go:239]   Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: W0717 22:56:50.841820    3828 reflector.go:533] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	W0717 23:01:21.110364   54649 out.go:239]   Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: E0717 22:56:50.841863    3828 reflector.go:148] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	I0717 23:01:21.110371   54649 out.go:309] Setting ErrFile to fd 2...
	I0717 23:01:21.110379   54649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 23:01:31.121237   54649 system_pods.go:59] 8 kube-system pods found
	I0717 23:01:31.121266   54649 system_pods.go:61] "coredns-5d78c9869d-rqcjj" [9f3bc4cf-fb20-413e-b367-27bcb997ab80] Running
	I0717 23:01:31.121272   54649 system_pods.go:61] "etcd-default-k8s-diff-port-504828" [1e432373-0f87-4cda-969e-492a8b534af0] Running
	I0717 23:01:31.121280   54649 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-504828" [573bd1d1-09ff-40b5-9746-0b3fa3d51f08] Running
	I0717 23:01:31.121290   54649 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-504828" [c6baeefc-57b7-4710-998c-0af932d2db14] Running
	I0717 23:01:31.121299   54649 system_pods.go:61] "kube-proxy-nmtc8" [1f8a0182-d1df-4609-86d1-7695a138e32f] Running
	I0717 23:01:31.121307   54649 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-504828" [df487feb-f937-4832-ad65-38718d4325c5] Running
	I0717 23:01:31.121317   54649 system_pods.go:61] "metrics-server-74d5c6b9c-j8f2f" [328c892b-7402-480b-bc29-a316c8fb7b1f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 23:01:31.121339   54649 system_pods.go:61] "storage-provisioner" [0b840b23-0b6f-4ed1-9ae5-d96311b3b9f1] Running
	I0717 23:01:31.121347   54649 system_pods.go:74] duration metric: took 11.432011006s to wait for pod list to return data ...
	I0717 23:01:31.121357   54649 default_sa.go:34] waiting for default service account to be created ...
	I0717 23:01:31.124377   54649 default_sa.go:45] found service account: "default"
	I0717 23:01:31.124403   54649 default_sa.go:55] duration metric: took 3.036772ms for default service account to be created ...
	I0717 23:01:31.124413   54649 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 23:01:31.131080   54649 system_pods.go:86] 8 kube-system pods found
	I0717 23:01:31.131116   54649 system_pods.go:89] "coredns-5d78c9869d-rqcjj" [9f3bc4cf-fb20-413e-b367-27bcb997ab80] Running
	I0717 23:01:31.131125   54649 system_pods.go:89] "etcd-default-k8s-diff-port-504828" [1e432373-0f87-4cda-969e-492a8b534af0] Running
	I0717 23:01:31.131132   54649 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-504828" [573bd1d1-09ff-40b5-9746-0b3fa3d51f08] Running
	I0717 23:01:31.131140   54649 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-504828" [c6baeefc-57b7-4710-998c-0af932d2db14] Running
	I0717 23:01:31.131151   54649 system_pods.go:89] "kube-proxy-nmtc8" [1f8a0182-d1df-4609-86d1-7695a138e32f] Running
	I0717 23:01:31.131158   54649 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-504828" [df487feb-f937-4832-ad65-38718d4325c5] Running
	I0717 23:01:31.131182   54649 system_pods.go:89] "metrics-server-74d5c6b9c-j8f2f" [328c892b-7402-480b-bc29-a316c8fb7b1f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 23:01:31.131190   54649 system_pods.go:89] "storage-provisioner" [0b840b23-0b6f-4ed1-9ae5-d96311b3b9f1] Running
	I0717 23:01:31.131204   54649 system_pods.go:126] duration metric: took 6.785139ms to wait for k8s-apps to be running ...
	I0717 23:01:31.131211   54649 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 23:01:31.131260   54649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 23:01:31.150458   54649 system_svc.go:56] duration metric: took 19.234064ms WaitForService to wait for kubelet.
	I0717 23:01:31.150495   54649 kubeadm.go:581] duration metric: took 4m39.911769992s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 23:01:31.150523   54649 node_conditions.go:102] verifying NodePressure condition ...
	I0717 23:01:31.153677   54649 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 23:01:31.153700   54649 node_conditions.go:123] node cpu capacity is 2
	I0717 23:01:31.153710   54649 node_conditions.go:105] duration metric: took 3.182344ms to run NodePressure ...
	I0717 23:01:31.153720   54649 start.go:228] waiting for startup goroutines ...
	I0717 23:01:31.153726   54649 start.go:233] waiting for cluster config update ...
	I0717 23:01:31.153737   54649 start.go:242] writing updated cluster config ...
	I0717 23:01:31.153995   54649 ssh_runner.go:195] Run: rm -f paused
	I0717 23:01:31.204028   54649 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 23:01:31.207280   54649 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-504828" cluster and "default" namespace by default
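	
	For anyone reproducing the checks above by hand, the following is a minimal sketch assembled from the commands visible in this log (crictl listing, journalctl, dmesg, and the healthz probe). It is not part of the test output: it assumes shell access to the node (for example via `minikube ssh -p default-k8s-diff-port-504828`), and the API server address and port (192.168.72.118:8444) are simply the values reported in the log lines above; adjust both for your own profile.
	
	#!/usr/bin/env bash
	# Hypothetical diagnostic helper mirroring the steps minikube logs above.
	# Assumes it is run on the minikube node (e.g. after `minikube ssh -p default-k8s-diff-port-504828`).
	set -euo pipefail
	
	# 1. Locate the control-plane containers, as in the "listing CRI containers" steps above.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager storage-provisioner; do
	  echo "== ${name} =="
	  sudo crictl ps -a --quiet --name="${name}"
	done
	
	# 2. Tail a specific container's logs, as in the "Gathering logs for ..." steps.
	#    Replace <container-id> with an ID printed by step 1.
	# sudo crictl logs --tail 400 <container-id>
	
	# 3. Node-level logs gathered by the test.
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	
	# 4. API server health probe; the log above shows this endpoint returning 200 "ok".
	#    -k skips certificate verification because the probe targets the VM IP directly.
	curl -k https://192.168.72.118:8444/healthz
	
	# 5. Kubelet service check, matching the systemctl probe in the log.
	sudo systemctl is-active --quiet kubelet && echo "kubelet active"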
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-07-17 22:50:21 UTC, ends at Mon 2023-07-17 23:09:27 UTC. --
	Jul 17 23:09:27 embed-certs-571296 crio[726]: time="2023-07-17 23:09:27.197258557Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87,PodSandboxId:a1fdf463e93d493f74f497e404b8ec209b736f04158d0c99708a6fb478e8fb10,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634572503605142,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1138e736-ef8d-4d24-86d5-cac3f58f0dd6,},Annotations:map[string]string{io.kubernetes.container.hash: 9216bcf5,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea,PodSandboxId:d2124bc3946dd019f989686bdee8a7ddd82f784bdf256b334578840cebc4efbb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689634572090932565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xjpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c074cca-2579-4a54-bf55-77bba0fbcd34,},Annotations:map[string]string{io.kubernetes.container.hash: e0f8977a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594,PodSandboxId:af4c08b491f1a72730e8e2b9d03812969712dc2acd8f15ba60c34e89387af9b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689634570774570131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-6ljtn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9488690c-8407-42ce-9938-039af0fa2c4d,},Annotations:map[string]string{io.kubernetes.container.hash: d223da52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2,PodSandboxId:dcb3735fb7b702d080ccb4bb353ff5a0d55ded47eb1194e3caeec02b27533ff2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689634547227044289,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa165bb58267ff6ce3707ef1dedee02,},An
notations:map[string]string{io.kubernetes.container.hash: d0943c0e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e,PodSandboxId:3270b4be80d3c39e3453f220743eb87fdde7e4d2d790fcd7ab9584e05ce0503e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689634546575855141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acb9fa0b62d329874dd103234a29c78,},Annotations:
map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135,PodSandboxId:ff813770ad9d8ec8e086b632a116328e56fcd1cf9de7f93925bb90210d35e709,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689634546471760854,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a4e2aae0879e1b095ac
b61dc1b0b9b,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c,PodSandboxId:a5d980dbc6cbe369536c5153779275ee04e2336538ae5493c410aafbceeb3eb7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689634546262114470,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57393fc23d7ddbdd6adf8d339e89ea7
3,},Annotations:map[string]string{io.kubernetes.container.hash: 601f9b32,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e7de1ce8-e819-43d7-a546-d94593db0aba name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 23:09:27 embed-certs-571296 crio[726]: time="2023-07-17 23:09:27.343888896Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=baa815db-f0fd-4196-ab11-2ab00bbccc60 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:09:27 embed-certs-571296 crio[726]: time="2023-07-17 23:09:27.344021068Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=baa815db-f0fd-4196-ab11-2ab00bbccc60 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:09:27 embed-certs-571296 crio[726]: time="2023-07-17 23:09:27.344293426Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87,PodSandboxId:a1fdf463e93d493f74f497e404b8ec209b736f04158d0c99708a6fb478e8fb10,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634572503605142,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1138e736-ef8d-4d24-86d5-cac3f58f0dd6,},Annotations:map[string]string{io.kubernetes.container.hash: 9216bcf5,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea,PodSandboxId:d2124bc3946dd019f989686bdee8a7ddd82f784bdf256b334578840cebc4efbb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689634572090932565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xjpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c074cca-2579-4a54-bf55-77bba0fbcd34,},Annotations:map[string]string{io.kubernetes.container.hash: e0f8977a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594,PodSandboxId:af4c08b491f1a72730e8e2b9d03812969712dc2acd8f15ba60c34e89387af9b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689634570774570131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-6ljtn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9488690c-8407-42ce-9938-039af0fa2c4d,},Annotations:map[string]string{io.kubernetes.container.hash: d223da52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2,PodSandboxId:dcb3735fb7b702d080ccb4bb353ff5a0d55ded47eb1194e3caeec02b27533ff2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689634547227044289,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa165bb58267ff6ce3707ef1dedee02,},An
notations:map[string]string{io.kubernetes.container.hash: d0943c0e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e,PodSandboxId:3270b4be80d3c39e3453f220743eb87fdde7e4d2d790fcd7ab9584e05ce0503e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689634546575855141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acb9fa0b62d329874dd103234a29c78,},Annotations:
map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135,PodSandboxId:ff813770ad9d8ec8e086b632a116328e56fcd1cf9de7f93925bb90210d35e709,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689634546471760854,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a4e2aae0879e1b095ac
b61dc1b0b9b,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c,PodSandboxId:a5d980dbc6cbe369536c5153779275ee04e2336538ae5493c410aafbceeb3eb7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689634546262114470,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57393fc23d7ddbdd6adf8d339e89ea7
3,},Annotations:map[string]string{io.kubernetes.container.hash: 601f9b32,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=baa815db-f0fd-4196-ab11-2ab00bbccc60 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:09:27 embed-certs-571296 crio[726]: time="2023-07-17 23:09:27.382071746Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=584ea034-77ac-4f1e-8bcf-5c795c0287a3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:09:27 embed-certs-571296 crio[726]: time="2023-07-17 23:09:27.382137072Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=584ea034-77ac-4f1e-8bcf-5c795c0287a3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:09:27 embed-certs-571296 crio[726]: time="2023-07-17 23:09:27.382306622Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87,PodSandboxId:a1fdf463e93d493f74f497e404b8ec209b736f04158d0c99708a6fb478e8fb10,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634572503605142,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1138e736-ef8d-4d24-86d5-cac3f58f0dd6,},Annotations:map[string]string{io.kubernetes.container.hash: 9216bcf5,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea,PodSandboxId:d2124bc3946dd019f989686bdee8a7ddd82f784bdf256b334578840cebc4efbb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689634572090932565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xjpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c074cca-2579-4a54-bf55-77bba0fbcd34,},Annotations:map[string]string{io.kubernetes.container.hash: e0f8977a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594,PodSandboxId:af4c08b491f1a72730e8e2b9d03812969712dc2acd8f15ba60c34e89387af9b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689634570774570131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-6ljtn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9488690c-8407-42ce-9938-039af0fa2c4d,},Annotations:map[string]string{io.kubernetes.container.hash: d223da52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2,PodSandboxId:dcb3735fb7b702d080ccb4bb353ff5a0d55ded47eb1194e3caeec02b27533ff2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689634547227044289,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa165bb58267ff6ce3707ef1dedee02,},An
notations:map[string]string{io.kubernetes.container.hash: d0943c0e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e,PodSandboxId:3270b4be80d3c39e3453f220743eb87fdde7e4d2d790fcd7ab9584e05ce0503e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689634546575855141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acb9fa0b62d329874dd103234a29c78,},Annotations:
map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135,PodSandboxId:ff813770ad9d8ec8e086b632a116328e56fcd1cf9de7f93925bb90210d35e709,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689634546471760854,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a4e2aae0879e1b095ac
b61dc1b0b9b,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c,PodSandboxId:a5d980dbc6cbe369536c5153779275ee04e2336538ae5493c410aafbceeb3eb7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689634546262114470,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57393fc23d7ddbdd6adf8d339e89ea7
3,},Annotations:map[string]string{io.kubernetes.container.hash: 601f9b32,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=584ea034-77ac-4f1e-8bcf-5c795c0287a3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:09:27 embed-certs-571296 crio[726]: time="2023-07-17 23:09:27.426058859Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=661bae55-12eb-4e2c-a05a-2a260096ebb6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:09:27 embed-certs-571296 crio[726]: time="2023-07-17 23:09:27.426164375Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=661bae55-12eb-4e2c-a05a-2a260096ebb6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:09:27 embed-certs-571296 crio[726]: time="2023-07-17 23:09:27.426377464Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87,PodSandboxId:a1fdf463e93d493f74f497e404b8ec209b736f04158d0c99708a6fb478e8fb10,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634572503605142,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1138e736-ef8d-4d24-86d5-cac3f58f0dd6,},Annotations:map[string]string{io.kubernetes.container.hash: 9216bcf5,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea,PodSandboxId:d2124bc3946dd019f989686bdee8a7ddd82f784bdf256b334578840cebc4efbb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689634572090932565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xjpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c074cca-2579-4a54-bf55-77bba0fbcd34,},Annotations:map[string]string{io.kubernetes.container.hash: e0f8977a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594,PodSandboxId:af4c08b491f1a72730e8e2b9d03812969712dc2acd8f15ba60c34e89387af9b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689634570774570131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-6ljtn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9488690c-8407-42ce-9938-039af0fa2c4d,},Annotations:map[string]string{io.kubernetes.container.hash: d223da52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2,PodSandboxId:dcb3735fb7b702d080ccb4bb353ff5a0d55ded47eb1194e3caeec02b27533ff2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689634547227044289,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa165bb58267ff6ce3707ef1dedee02,},An
notations:map[string]string{io.kubernetes.container.hash: d0943c0e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e,PodSandboxId:3270b4be80d3c39e3453f220743eb87fdde7e4d2d790fcd7ab9584e05ce0503e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689634546575855141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acb9fa0b62d329874dd103234a29c78,},Annotations:
map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135,PodSandboxId:ff813770ad9d8ec8e086b632a116328e56fcd1cf9de7f93925bb90210d35e709,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689634546471760854,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a4e2aae0879e1b095ac
b61dc1b0b9b,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c,PodSandboxId:a5d980dbc6cbe369536c5153779275ee04e2336538ae5493c410aafbceeb3eb7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689634546262114470,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57393fc23d7ddbdd6adf8d339e89ea7
3,},Annotations:map[string]string{io.kubernetes.container.hash: 601f9b32,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=661bae55-12eb-4e2c-a05a-2a260096ebb6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:09:27 embed-certs-571296 crio[726]: time="2023-07-17 23:09:27.466240357Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a9657516-813c-4ece-9ef6-a7db15fd240f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:09:27 embed-certs-571296 crio[726]: time="2023-07-17 23:09:27.466332717Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a9657516-813c-4ece-9ef6-a7db15fd240f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:09:27 embed-certs-571296 crio[726]: time="2023-07-17 23:09:27.466501877Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87,PodSandboxId:a1fdf463e93d493f74f497e404b8ec209b736f04158d0c99708a6fb478e8fb10,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634572503605142,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1138e736-ef8d-4d24-86d5-cac3f58f0dd6,},Annotations:map[string]string{io.kubernetes.container.hash: 9216bcf5,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea,PodSandboxId:d2124bc3946dd019f989686bdee8a7ddd82f784bdf256b334578840cebc4efbb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689634572090932565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xjpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c074cca-2579-4a54-bf55-77bba0fbcd34,},Annotations:map[string]string{io.kubernetes.container.hash: e0f8977a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594,PodSandboxId:af4c08b491f1a72730e8e2b9d03812969712dc2acd8f15ba60c34e89387af9b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689634570774570131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-6ljtn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9488690c-8407-42ce-9938-039af0fa2c4d,},Annotations:map[string]string{io.kubernetes.container.hash: d223da52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2,PodSandboxId:dcb3735fb7b702d080ccb4bb353ff5a0d55ded47eb1194e3caeec02b27533ff2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689634547227044289,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa165bb58267ff6ce3707ef1dedee02,},An
notations:map[string]string{io.kubernetes.container.hash: d0943c0e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e,PodSandboxId:3270b4be80d3c39e3453f220743eb87fdde7e4d2d790fcd7ab9584e05ce0503e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689634546575855141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acb9fa0b62d329874dd103234a29c78,},Annotations:
map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135,PodSandboxId:ff813770ad9d8ec8e086b632a116328e56fcd1cf9de7f93925bb90210d35e709,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689634546471760854,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a4e2aae0879e1b095ac
b61dc1b0b9b,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c,PodSandboxId:a5d980dbc6cbe369536c5153779275ee04e2336538ae5493c410aafbceeb3eb7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689634546262114470,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57393fc23d7ddbdd6adf8d339e89ea7
3,},Annotations:map[string]string{io.kubernetes.container.hash: 601f9b32,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a9657516-813c-4ece-9ef6-a7db15fd240f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:09:27 embed-certs-571296 crio[726]: time="2023-07-17 23:09:27.501963895Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=30985998-7217-4dfe-a6d5-7ff3cb1541a8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:09:27 embed-certs-571296 crio[726]: time="2023-07-17 23:09:27.502067345Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=30985998-7217-4dfe-a6d5-7ff3cb1541a8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:09:27 embed-certs-571296 crio[726]: time="2023-07-17 23:09:27.502230071Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87,PodSandboxId:a1fdf463e93d493f74f497e404b8ec209b736f04158d0c99708a6fb478e8fb10,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634572503605142,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1138e736-ef8d-4d24-86d5-cac3f58f0dd6,},Annotations:map[string]string{io.kubernetes.container.hash: 9216bcf5,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea,PodSandboxId:d2124bc3946dd019f989686bdee8a7ddd82f784bdf256b334578840cebc4efbb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689634572090932565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xjpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c074cca-2579-4a54-bf55-77bba0fbcd34,},Annotations:map[string]string{io.kubernetes.container.hash: e0f8977a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594,PodSandboxId:af4c08b491f1a72730e8e2b9d03812969712dc2acd8f15ba60c34e89387af9b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689634570774570131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-6ljtn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9488690c-8407-42ce-9938-039af0fa2c4d,},Annotations:map[string]string{io.kubernetes.container.hash: d223da52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2,PodSandboxId:dcb3735fb7b702d080ccb4bb353ff5a0d55ded47eb1194e3caeec02b27533ff2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689634547227044289,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa165bb58267ff6ce3707ef1dedee02,},An
notations:map[string]string{io.kubernetes.container.hash: d0943c0e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e,PodSandboxId:3270b4be80d3c39e3453f220743eb87fdde7e4d2d790fcd7ab9584e05ce0503e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689634546575855141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acb9fa0b62d329874dd103234a29c78,},Annotations:
map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135,PodSandboxId:ff813770ad9d8ec8e086b632a116328e56fcd1cf9de7f93925bb90210d35e709,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689634546471760854,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a4e2aae0879e1b095ac
b61dc1b0b9b,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c,PodSandboxId:a5d980dbc6cbe369536c5153779275ee04e2336538ae5493c410aafbceeb3eb7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689634546262114470,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57393fc23d7ddbdd6adf8d339e89ea7
3,},Annotations:map[string]string{io.kubernetes.container.hash: 601f9b32,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=30985998-7217-4dfe-a6d5-7ff3cb1541a8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:09:27 embed-certs-571296 crio[726]: time="2023-07-17 23:09:27.548168227Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2d7ec227-7c7a-462f-affa-6fc76f64bd5f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:09:27 embed-certs-571296 crio[726]: time="2023-07-17 23:09:27.548256860Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2d7ec227-7c7a-462f-affa-6fc76f64bd5f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:09:27 embed-certs-571296 crio[726]: time="2023-07-17 23:09:27.548471040Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87,PodSandboxId:a1fdf463e93d493f74f497e404b8ec209b736f04158d0c99708a6fb478e8fb10,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634572503605142,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1138e736-ef8d-4d24-86d5-cac3f58f0dd6,},Annotations:map[string]string{io.kubernetes.container.hash: 9216bcf5,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea,PodSandboxId:d2124bc3946dd019f989686bdee8a7ddd82f784bdf256b334578840cebc4efbb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689634572090932565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xjpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c074cca-2579-4a54-bf55-77bba0fbcd34,},Annotations:map[string]string{io.kubernetes.container.hash: e0f8977a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594,PodSandboxId:af4c08b491f1a72730e8e2b9d03812969712dc2acd8f15ba60c34e89387af9b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689634570774570131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-6ljtn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9488690c-8407-42ce-9938-039af0fa2c4d,},Annotations:map[string]string{io.kubernetes.container.hash: d223da52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2,PodSandboxId:dcb3735fb7b702d080ccb4bb353ff5a0d55ded47eb1194e3caeec02b27533ff2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689634547227044289,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa165bb58267ff6ce3707ef1dedee02,},An
notations:map[string]string{io.kubernetes.container.hash: d0943c0e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e,PodSandboxId:3270b4be80d3c39e3453f220743eb87fdde7e4d2d790fcd7ab9584e05ce0503e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689634546575855141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acb9fa0b62d329874dd103234a29c78,},Annotations:
map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135,PodSandboxId:ff813770ad9d8ec8e086b632a116328e56fcd1cf9de7f93925bb90210d35e709,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689634546471760854,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a4e2aae0879e1b095ac
b61dc1b0b9b,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c,PodSandboxId:a5d980dbc6cbe369536c5153779275ee04e2336538ae5493c410aafbceeb3eb7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689634546262114470,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57393fc23d7ddbdd6adf8d339e89ea7
3,},Annotations:map[string]string{io.kubernetes.container.hash: 601f9b32,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2d7ec227-7c7a-462f-affa-6fc76f64bd5f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:09:27 embed-certs-571296 crio[726]: time="2023-07-17 23:09:27.586492929Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=65ee138f-ce2b-4e0b-90de-7952fdd331ad name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:09:27 embed-certs-571296 crio[726]: time="2023-07-17 23:09:27.586595255Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=65ee138f-ce2b-4e0b-90de-7952fdd331ad name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:09:27 embed-certs-571296 crio[726]: time="2023-07-17 23:09:27.586840965Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87,PodSandboxId:a1fdf463e93d493f74f497e404b8ec209b736f04158d0c99708a6fb478e8fb10,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634572503605142,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1138e736-ef8d-4d24-86d5-cac3f58f0dd6,},Annotations:map[string]string{io.kubernetes.container.hash: 9216bcf5,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea,PodSandboxId:d2124bc3946dd019f989686bdee8a7ddd82f784bdf256b334578840cebc4efbb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689634572090932565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xjpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c074cca-2579-4a54-bf55-77bba0fbcd34,},Annotations:map[string]string{io.kubernetes.container.hash: e0f8977a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594,PodSandboxId:af4c08b491f1a72730e8e2b9d03812969712dc2acd8f15ba60c34e89387af9b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689634570774570131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-6ljtn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9488690c-8407-42ce-9938-039af0fa2c4d,},Annotations:map[string]string{io.kubernetes.container.hash: d223da52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2,PodSandboxId:dcb3735fb7b702d080ccb4bb353ff5a0d55ded47eb1194e3caeec02b27533ff2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689634547227044289,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa165bb58267ff6ce3707ef1dedee02,},An
notations:map[string]string{io.kubernetes.container.hash: d0943c0e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e,PodSandboxId:3270b4be80d3c39e3453f220743eb87fdde7e4d2d790fcd7ab9584e05ce0503e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689634546575855141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acb9fa0b62d329874dd103234a29c78,},Annotations:
map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135,PodSandboxId:ff813770ad9d8ec8e086b632a116328e56fcd1cf9de7f93925bb90210d35e709,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689634546471760854,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a4e2aae0879e1b095ac
b61dc1b0b9b,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c,PodSandboxId:a5d980dbc6cbe369536c5153779275ee04e2336538ae5493c410aafbceeb3eb7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689634546262114470,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57393fc23d7ddbdd6adf8d339e89ea7
3,},Annotations:map[string]string{io.kubernetes.container.hash: 601f9b32,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=65ee138f-ce2b-4e0b-90de-7952fdd331ad name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:09:27 embed-certs-571296 crio[726]: time="2023-07-17 23:09:27.617128767Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=dcf8eac5-d9c7-49d9-805f-1047f6f03b51 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:09:27 embed-certs-571296 crio[726]: time="2023-07-17 23:09:27.617222498Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=dcf8eac5-d9c7-49d9-805f-1047f6f03b51 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:09:27 embed-certs-571296 crio[726]: time="2023-07-17 23:09:27.617381831Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87,PodSandboxId:a1fdf463e93d493f74f497e404b8ec209b736f04158d0c99708a6fb478e8fb10,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634572503605142,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1138e736-ef8d-4d24-86d5-cac3f58f0dd6,},Annotations:map[string]string{io.kubernetes.container.hash: 9216bcf5,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea,PodSandboxId:d2124bc3946dd019f989686bdee8a7ddd82f784bdf256b334578840cebc4efbb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689634572090932565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xjpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c074cca-2579-4a54-bf55-77bba0fbcd34,},Annotations:map[string]string{io.kubernetes.container.hash: e0f8977a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594,PodSandboxId:af4c08b491f1a72730e8e2b9d03812969712dc2acd8f15ba60c34e89387af9b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689634570774570131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-6ljtn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9488690c-8407-42ce-9938-039af0fa2c4d,},Annotations:map[string]string{io.kubernetes.container.hash: d223da52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2,PodSandboxId:dcb3735fb7b702d080ccb4bb353ff5a0d55ded47eb1194e3caeec02b27533ff2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689634547227044289,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa165bb58267ff6ce3707ef1dedee02,},An
notations:map[string]string{io.kubernetes.container.hash: d0943c0e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e,PodSandboxId:3270b4be80d3c39e3453f220743eb87fdde7e4d2d790fcd7ab9584e05ce0503e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689634546575855141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acb9fa0b62d329874dd103234a29c78,},Annotations:
map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135,PodSandboxId:ff813770ad9d8ec8e086b632a116328e56fcd1cf9de7f93925bb90210d35e709,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689634546471760854,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a4e2aae0879e1b095ac
b61dc1b0b9b,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c,PodSandboxId:a5d980dbc6cbe369536c5153779275ee04e2336538ae5493c410aafbceeb3eb7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689634546262114470,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57393fc23d7ddbdd6adf8d339e89ea7
3,},Annotations:map[string]string{io.kubernetes.container.hash: 601f9b32,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=dcf8eac5-d9c7-49d9-805f-1047f6f03b51 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	9c19e84545ef3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   a1fdf463e93d4
	5768c3f6c2960       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c   13 minutes ago      Running             kube-proxy                0                   d2124bc3946dd
	828166d2e045a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   13 minutes ago      Running             coredns                   0                   af4c08b491f1a
	e899989fdd5cd       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681   13 minutes ago      Running             etcd                      2                   dcb3735fb7b70
	a818326b40b2f       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a   13 minutes ago      Running             kube-scheduler            2                   3270b4be80d3c
	0272ac3812d33       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f   13 minutes ago      Running             kube-controller-manager   2                   ff813770ad9d8
	50fe7f6b0feef       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a   13 minutes ago      Running             kube-apiserver            2                   a5d980dbc6cbe
	
	* 
	* ==> coredns [828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:37070 - 25273 "HINFO IN 7417818828265277478.6923989445757744740. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017827035s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-571296
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-571296
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8
	                    minikube.k8s.io/name=embed-certs-571296
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T22_55_55_0700
	                    minikube.k8s.io/version=v1.31.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 22:55:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-571296
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 23:09:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 23:06:28 +0000   Mon, 17 Jul 2023 22:55:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 23:06:28 +0000   Mon, 17 Jul 2023 22:55:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 23:06:28 +0000   Mon, 17 Jul 2023 22:55:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 23:06:28 +0000   Mon, 17 Jul 2023 22:56:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.179
	  Hostname:    embed-certs-571296
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 5f599878ef444243a720c3dbd0b0a67a
	  System UUID:                5f599878-ef44-4243-a720-c3dbd0b0a67a
	  Boot ID:                    305230bc-a94e-4ef4-82b6-56fed7cc0a51
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-6ljtn                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-embed-certs-571296                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-embed-certs-571296             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-embed-certs-571296    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-xjpds                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-embed-certs-571296             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-74d5c6b9c-cknmm                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-571296 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-571296 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-571296 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node embed-certs-571296 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node embed-certs-571296 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node embed-certs-571296 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             13m                kubelet          Node embed-certs-571296 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                13m                kubelet          Node embed-certs-571296 status is now: NodeReady
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-571296 event: Registered Node embed-certs-571296 in Controller
	
	* 
	* ==> dmesg <==
	* [Jul17 22:50] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070530] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.346961] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.481208] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.142396] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.432217] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.000505] systemd-fstab-generator[650]: Ignoring "noauto" for root device
	[  +0.115036] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.141145] systemd-fstab-generator[674]: Ignoring "noauto" for root device
	[  +0.116206] systemd-fstab-generator[685]: Ignoring "noauto" for root device
	[  +0.210624] systemd-fstab-generator[709]: Ignoring "noauto" for root device
	[ +17.116418] systemd-fstab-generator[924]: Ignoring "noauto" for root device
	[Jul17 22:51] kauditd_printk_skb: 29 callbacks suppressed
	[ +25.214063] hrtimer: interrupt took 6366582 ns
	[Jul17 22:55] systemd-fstab-generator[3542]: Ignoring "noauto" for root device
	[  +9.833746] systemd-fstab-generator[3861]: Ignoring "noauto" for root device
	[Jul17 22:56] kauditd_printk_skb: 7 callbacks suppressed
	
	* 
	* ==> etcd [e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2] <==
	* {"level":"info","ts":"2023-07-17T22:55:48.631Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.61.179:2380"}
	{"level":"info","ts":"2023-07-17T22:55:48.639Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.61.179:2380"}
	{"level":"info","ts":"2023-07-17T22:55:48.639Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"564c1a3a64ab9e7c","initial-advertise-peer-urls":["https://192.168.61.179:2380"],"listen-peer-urls":["https://192.168.61.179:2380"],"advertise-client-urls":["https://192.168.61.179:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.179:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-17T22:55:48.640Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-17T22:55:49.585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"564c1a3a64ab9e7c is starting a new election at term 1"}
	{"level":"info","ts":"2023-07-17T22:55:49.585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"564c1a3a64ab9e7c became pre-candidate at term 1"}
	{"level":"info","ts":"2023-07-17T22:55:49.585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"564c1a3a64ab9e7c received MsgPreVoteResp from 564c1a3a64ab9e7c at term 1"}
	{"level":"info","ts":"2023-07-17T22:55:49.585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"564c1a3a64ab9e7c became candidate at term 2"}
	{"level":"info","ts":"2023-07-17T22:55:49.585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"564c1a3a64ab9e7c received MsgVoteResp from 564c1a3a64ab9e7c at term 2"}
	{"level":"info","ts":"2023-07-17T22:55:49.585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"564c1a3a64ab9e7c became leader at term 2"}
	{"level":"info","ts":"2023-07-17T22:55:49.585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 564c1a3a64ab9e7c elected leader 564c1a3a64ab9e7c at term 2"}
	{"level":"info","ts":"2023-07-17T22:55:49.587Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T22:55:49.589Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"564c1a3a64ab9e7c","local-member-attributes":"{Name:embed-certs-571296 ClientURLs:[https://192.168.61.179:2379]}","request-path":"/0/members/564c1a3a64ab9e7c/attributes","cluster-id":"be5c98cbd915062","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-17T22:55:49.589Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T22:55:49.589Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"be5c98cbd915062","local-member-id":"564c1a3a64ab9e7c","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T22:55:49.589Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T22:55:49.589Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T22:55:49.589Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T22:55:49.590Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-17T22:55:49.591Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.61.179:2379"}
	{"level":"info","ts":"2023-07-17T22:55:49.591Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-17T22:55:49.591Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-17T23:05:49.626Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":725}
	{"level":"info","ts":"2023-07-17T23:05:49.634Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":725,"took":"6.973134ms","hash":3749277493}
	{"level":"info","ts":"2023-07-17T23:05:49.636Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3749277493,"revision":725,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  23:09:27 up 19 min,  0 users,  load average: 0.33, 0.43, 0.32
	Linux embed-certs-571296 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c] <==
	* I0717 23:05:52.272130       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 23:05:52.272011       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 23:05:52.272245       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 23:05:52.273543       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 23:06:51.159809       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.96.165.138:443: connect: connection refused
	I0717 23:06:51.159926       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0717 23:06:52.273400       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 23:06:52.273473       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 23:06:52.273481       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 23:06:52.273755       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 23:06:52.273814       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 23:06:52.275043       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 23:07:51.160525       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.96.165.138:443: connect: connection refused
	I0717 23:07:51.160927       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0717 23:08:51.160293       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.96.165.138:443: connect: connection refused
	I0717 23:08:51.160354       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0717 23:08:52.273743       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 23:08:52.273906       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 23:08:52.273944       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 23:08:52.276092       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 23:08:52.276341       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 23:08:52.276382       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135] <==
	* W0717 23:03:07.584485       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:03:37.072055       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:03:37.596275       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:04:07.078393       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:04:07.606001       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:04:37.083587       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:04:37.619218       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:05:07.092421       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:05:07.627889       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:05:37.098560       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:05:37.639295       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:06:07.105833       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:06:07.648884       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:06:37.111339       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:06:37.666881       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:07:07.119753       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:07:07.676435       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:07:37.126081       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:07:37.687075       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:08:07.133069       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:08:07.695423       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:08:37.138869       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:08:37.708808       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:09:07.155410       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:09:07.718859       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	
	* 
	* ==> kube-proxy [5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea] <==
	* I0717 22:56:12.753578       1 node.go:141] Successfully retrieved node IP: 192.168.61.179
	I0717 22:56:12.754254       1 server_others.go:110] "Detected node IP" address="192.168.61.179"
	I0717 22:56:12.754470       1 server_others.go:554] "Using iptables proxy"
	I0717 22:56:12.797196       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0717 22:56:12.797282       1 server_others.go:192] "Using iptables Proxier"
	I0717 22:56:12.798052       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 22:56:12.799347       1 server.go:658] "Version info" version="v1.27.3"
	I0717 22:56:12.799411       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 22:56:12.803398       1 config.go:188] "Starting service config controller"
	I0717 22:56:12.805010       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 22:56:12.805347       1 config.go:97] "Starting endpoint slice config controller"
	I0717 22:56:12.805387       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 22:56:12.811782       1 config.go:315] "Starting node config controller"
	I0717 22:56:12.812037       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 22:56:12.905595       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0717 22:56:12.905753       1 shared_informer.go:318] Caches are synced for service config
	I0717 22:56:12.912361       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e] <==
	* W0717 22:55:51.345847       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 22:55:51.346746       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 22:55:52.169072       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 22:55:52.169237       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 22:55:52.279187       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 22:55:52.279279       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 22:55:52.340312       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 22:55:52.340426       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 22:55:52.363615       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 22:55:52.363783       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 22:55:52.383899       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 22:55:52.383992       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 22:55:52.420072       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 22:55:52.420148       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 22:55:52.460013       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 22:55:52.460140       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 22:55:52.477853       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 22:55:52.477965       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 22:55:52.498127       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 22:55:52.498234       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 22:55:52.547416       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 22:55:52.547502       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 22:55:52.847567       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 22:55:52.847937       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 22:55:54.696321       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-07-17 22:50:21 UTC, ends at Mon 2023-07-17 23:09:28 UTC. --
	Jul 17 23:06:55 embed-certs-571296 kubelet[3869]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 23:06:55 embed-certs-571296 kubelet[3869]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 23:06:56 embed-certs-571296 kubelet[3869]: E0717 23:06:56.213150    3869 remote_image.go:167] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 17 23:06:56 embed-certs-571296 kubelet[3869]: E0717 23:06:56.213216    3869 kuberuntime_image.go:53] "Failed to pull image" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 17 23:06:56 embed-certs-571296 kubelet[3869]: E0717 23:06:56.213406    3869 kuberuntime_manager.go:1212] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-7hrfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pr
obeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod metrics-server-74d5c6b9c-cknmm_kube-system(d1fb930f-518d-4ff4-94fe-7743ab55ecc6): ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 17 23:06:56 embed-certs-571296 kubelet[3869]: E0717 23:06:56.213455    3869 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-74d5c6b9c-cknmm" podUID=d1fb930f-518d-4ff4-94fe-7743ab55ecc6
	Jul 17 23:07:08 embed-certs-571296 kubelet[3869]: E0717 23:07:08.193530    3869 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-cknmm" podUID=d1fb930f-518d-4ff4-94fe-7743ab55ecc6
	Jul 17 23:07:23 embed-certs-571296 kubelet[3869]: E0717 23:07:23.194877    3869 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-cknmm" podUID=d1fb930f-518d-4ff4-94fe-7743ab55ecc6
	Jul 17 23:07:38 embed-certs-571296 kubelet[3869]: E0717 23:07:38.193610    3869 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-cknmm" podUID=d1fb930f-518d-4ff4-94fe-7743ab55ecc6
	Jul 17 23:07:50 embed-certs-571296 kubelet[3869]: E0717 23:07:50.194090    3869 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-cknmm" podUID=d1fb930f-518d-4ff4-94fe-7743ab55ecc6
	Jul 17 23:07:55 embed-certs-571296 kubelet[3869]: E0717 23:07:55.333556    3869 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 17 23:07:55 embed-certs-571296 kubelet[3869]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 23:07:55 embed-certs-571296 kubelet[3869]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 23:07:55 embed-certs-571296 kubelet[3869]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 23:08:04 embed-certs-571296 kubelet[3869]: E0717 23:08:04.194092    3869 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-cknmm" podUID=d1fb930f-518d-4ff4-94fe-7743ab55ecc6
	Jul 17 23:08:16 embed-certs-571296 kubelet[3869]: E0717 23:08:16.193282    3869 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-cknmm" podUID=d1fb930f-518d-4ff4-94fe-7743ab55ecc6
	Jul 17 23:08:28 embed-certs-571296 kubelet[3869]: E0717 23:08:28.193423    3869 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-cknmm" podUID=d1fb930f-518d-4ff4-94fe-7743ab55ecc6
	Jul 17 23:08:42 embed-certs-571296 kubelet[3869]: E0717 23:08:42.194266    3869 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-cknmm" podUID=d1fb930f-518d-4ff4-94fe-7743ab55ecc6
	Jul 17 23:08:55 embed-certs-571296 kubelet[3869]: E0717 23:08:55.333157    3869 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 17 23:08:55 embed-certs-571296 kubelet[3869]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 23:08:55 embed-certs-571296 kubelet[3869]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 23:08:55 embed-certs-571296 kubelet[3869]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 23:08:57 embed-certs-571296 kubelet[3869]: E0717 23:08:57.195476    3869 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-cknmm" podUID=d1fb930f-518d-4ff4-94fe-7743ab55ecc6
	Jul 17 23:09:12 embed-certs-571296 kubelet[3869]: E0717 23:09:12.194030    3869 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-cknmm" podUID=d1fb930f-518d-4ff4-94fe-7743ab55ecc6
	Jul 17 23:09:25 embed-certs-571296 kubelet[3869]: E0717 23:09:25.196096    3869 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-cknmm" podUID=d1fb930f-518d-4ff4-94fe-7743ab55ecc6
	
	* 
	* ==> storage-provisioner [9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87] <==
	* I0717 22:56:12.671565       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 22:56:12.687180       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 22:56:12.687256       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 22:56:12.704619       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 22:56:12.705995       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-571296_9e5d8b9a-c6b9-4f0e-bad7-e5fc4765aad8!
	I0717 22:56:12.707640       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4fcc2d2a-fa66-4f1f-bc39-b898ddd2283a", APIVersion:"v1", ResourceVersion:"465", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-571296_9e5d8b9a-c6b9-4f0e-bad7-e5fc4765aad8 became leader
	I0717 22:56:12.807132       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-571296_9e5d8b9a-c6b9-4f0e-bad7-e5fc4765aad8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-571296 -n embed-certs-571296
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-571296 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5c6b9c-cknmm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-571296 describe pod metrics-server-74d5c6b9c-cknmm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-571296 describe pod metrics-server-74d5c6b9c-cknmm: exit status 1 (73.416809ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5c6b9c-cknmm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-571296 describe pod metrics-server-74d5c6b9c-cknmm: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.49s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0717 23:02:28.101068   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/functional-767593/client.crt: no such file or directory
E0717 23:03:11.892210   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
E0717 23:03:51.148731   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/functional-767593/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-504828 -n default-k8s-diff-port-504828
E0717 23:10:31.747261   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: no such file or directory
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-07-17 23:10:31.764385473 +0000 UTC m=+5395.554171473
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-504828 -n default-k8s-diff-port-504828
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-504828 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-504828 logs -n 25: (1.111261497s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p embed-certs-571296                                  | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-366864                              | cert-expiration-366864       | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:42 UTC |
	| delete  | -p                                                     | disable-driver-mounts-615088 | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:42 UTC |
	|         | disable-driver-mounts-615088                           |                              |         |         |                     |                     |
	| start   | -p no-preload-935524                                   | no-preload-935524            | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| ssh     | -p NoKubernetes-431736 sudo                            | NoKubernetes-431736          | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-431736                                 | NoKubernetes-431736          | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:42 UTC |
	| start   | -p                                                     | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:44 UTC |
	|         | default-k8s-diff-port-504828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-332820        | old-k8s-version-332820       | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-332820                              | old-k8s-version-332820       | jenkins | v1.31.0 | 17 Jul 23 22:43 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-571296            | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 22:44 UTC | 17 Jul 23 22:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-571296                                  | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 22:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-935524             | no-preload-935524            | jenkins | v1.31.0 | 17 Jul 23 22:44 UTC | 17 Jul 23 22:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-935524                                   | no-preload-935524            | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-504828  | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC | 17 Jul 23 22:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC |                     |
	|         | default-k8s-diff-port-504828                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-332820             | old-k8s-version-332820       | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-332820                              | old-k8s-version-332820       | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC | 17 Jul 23 22:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-571296                 | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 22:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-571296                                  | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 22:46 UTC | 17 Jul 23 23:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-935524                  | no-preload-935524            | jenkins | v1.31.0 | 17 Jul 23 22:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-504828       | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-935524                                   | no-preload-935524            | jenkins | v1.31.0 | 17 Jul 23 22:47 UTC | 17 Jul 23 22:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:47 UTC | 17 Jul 23 23:01 UTC |
	|         | default-k8s-diff-port-504828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-332820                              | old-k8s-version-332820       | jenkins | v1.31.0 | 17 Jul 23 23:10 UTC | 17 Jul 23 23:10 UTC |
	| start   | -p newest-cni-670356 --memory=2200 --alsologtostderr   | newest-cni-670356            | jenkins | v1.31.0 | 17 Jul 23 23:10 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
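For readability, the last entry in the command table above (the start of newest-cni-670356, still in progress at the time of this dump) corresponds to a single invocation along these lines, reconstructed only from the flags listed in the table, with the binary path taken from MINIKUBE_BIN in the log below:

    out/minikube-linux-amd64 start -p newest-cni-670356 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.27.3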
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 23:10:08
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 23:10:08.247050   59156 out.go:296] Setting OutFile to fd 1 ...
	I0717 23:10:08.247201   59156 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 23:10:08.247215   59156 out.go:309] Setting ErrFile to fd 2...
	I0717 23:10:08.247222   59156 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 23:10:08.247414   59156 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-15759/.minikube/bin
	I0717 23:10:08.247977   59156 out.go:303] Setting JSON to false
	I0717 23:10:08.248971   59156 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":10360,"bootTime":1689625048,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 23:10:08.249039   59156 start.go:138] virtualization: kvm guest
	I0717 23:10:08.251715   59156 out.go:177] * [newest-cni-670356] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 23:10:08.253548   59156 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 23:10:08.253490   59156 notify.go:220] Checking for updates...
	I0717 23:10:08.255008   59156 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 23:10:08.256297   59156 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 23:10:08.257685   59156 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-15759/.minikube
	I0717 23:10:08.259030   59156 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 23:10:08.260529   59156 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 23:10:08.262380   59156 config.go:182] Loaded profile config "default-k8s-diff-port-504828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 23:10:08.262475   59156 config.go:182] Loaded profile config "embed-certs-571296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 23:10:08.262557   59156 config.go:182] Loaded profile config "no-preload-935524": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 23:10:08.262640   59156 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 23:10:08.299924   59156 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 23:10:08.301235   59156 start.go:298] selected driver: kvm2
	I0717 23:10:08.301248   59156 start.go:880] validating driver "kvm2" against <nil>
	I0717 23:10:08.301268   59156 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 23:10:08.302109   59156 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 23:10:08.302182   59156 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16899-15759/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 23:10:08.316608   59156 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.0
	I0717 23:10:08.316650   59156 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	W0717 23:10:08.316672   59156 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0717 23:10:08.316861   59156 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0717 23:10:08.316884   59156 cni.go:84] Creating CNI manager for ""
	I0717 23:10:08.316896   59156 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 23:10:08.316904   59156 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 23:10:08.316917   59156 start_flags.go:319] config:
	{Name:newest-cni-670356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-670356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlu
gin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
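The bridge-CNI decision logged at cni.go:152 above (kvm2 driver, crio runtime, no CNI requested) boils down to a small selection rule. The sketch below only illustrates that rule; it is not minikube's actual cni package, and chooseCNI and its arguments are hypothetical names.

package main

import "fmt"

// chooseCNI is a hypothetical illustration of the decision logged at
// cni.go:152 above: with no explicit CNI requested, a non-Docker runtime on
// a VM driver such as kvm2 gets the simple bridge CNI.
func chooseCNI(driver, runtime, requested string) string {
	if requested != "" {
		return requested // an explicit --cni choice always wins
	}
	switch runtime {
	case "crio", "containerd":
		return "bridge" // what the log reports for kvm2 + crio
	default:
		return "" // the docker runtime can fall back to its own networking
	}
}

func main() {
	fmt.Println(chooseCNI("kvm2", "crio", "")) // prints: bridge
}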

                                                
                                                
	I0717 23:10:08.317041   59156 iso.go:125] acquiring lock: {Name:mk0fc3164b0f4222cb2c3b274841202f1f6c0fc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 23:10:08.319266   59156 out.go:177] * Starting control plane node newest-cni-670356 in cluster newest-cni-670356
	I0717 23:10:08.320561   59156 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 23:10:08.320595   59156 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0717 23:10:08.320612   59156 cache.go:57] Caching tarball of preloaded images
	I0717 23:10:08.320698   59156 preload.go:174] Found /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 23:10:08.320708   59156 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
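The preload check at preload.go:174 / cache.go:60 above reuses a tarball that is already present under the cache directory instead of downloading it again. A minimal sketch of that behaviour, with a hypothetical ensurePreload helper and an example path copied from the log:

package main

import (
	"fmt"
	"os"
)

// ensurePreload sketches the behaviour logged by preload.go/cache.go above:
// a preloaded image tarball already present in the local cache is reused and
// the download is skipped. The function name and error text are illustrative.
func ensurePreload(cachePath string) (string, error) {
	if _, err := os.Stat(cachePath); err == nil {
		return cachePath, nil // found in cache, skip download
	}
	// the real flow would download the tarball here; omitted in this sketch
	return "", fmt.Errorf("preload %s not cached, download required", cachePath)
}

func main() {
	p, err := ensurePreload("/home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4")
	fmt.Println(p, err)
}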
	I0717 23:10:08.320859   59156 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/newest-cni-670356/config.json ...
	I0717 23:10:08.320882   59156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/newest-cni-670356/config.json: {Name:mk28622c0d60a87431580f98e1de245f1b8149ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 23:10:08.321004   59156 start.go:365] acquiring machines lock for newest-cni-670356: {Name:mk8a02d78ba6485e1a7d39c2e7feff944c6a9dbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 23:10:08.321031   59156 start.go:369] acquired machines lock for "newest-cni-670356" in 14.905µs
	I0717 23:10:08.321046   59156 start.go:93] Provisioning new machine with config: &{Name:newest-cni-670356 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni
-670356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:doc
ker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 23:10:08.321112   59156 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 23:10:08.322876   59156 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 23:10:08.322993   59156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 23:10:08.323033   59156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 23:10:08.337438   59156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41785
	I0717 23:10:08.337896   59156 main.go:141] libmachine: () Calling .GetVersion
	I0717 23:10:08.338500   59156 main.go:141] libmachine: Using API Version  1
	I0717 23:10:08.338521   59156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 23:10:08.338917   59156 main.go:141] libmachine: () Calling .GetMachineName
	I0717 23:10:08.339141   59156 main.go:141] libmachine: (newest-cni-670356) Calling .GetMachineName
	I0717 23:10:08.339273   59156 main.go:141] libmachine: (newest-cni-670356) Calling .DriverName
	I0717 23:10:08.339420   59156 start.go:159] libmachine.API.Create for "newest-cni-670356" (driver="kvm2")
	I0717 23:10:08.339448   59156 client.go:168] LocalClient.Create starting
	I0717 23:10:08.339476   59156 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem
	I0717 23:10:08.339510   59156 main.go:141] libmachine: Decoding PEM data...
	I0717 23:10:08.339524   59156 main.go:141] libmachine: Parsing certificate...
	I0717 23:10:08.339575   59156 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem
	I0717 23:10:08.339598   59156 main.go:141] libmachine: Decoding PEM data...
	I0717 23:10:08.339609   59156 main.go:141] libmachine: Parsing certificate...
	I0717 23:10:08.339623   59156 main.go:141] libmachine: Running pre-create checks...
	I0717 23:10:08.339632   59156 main.go:141] libmachine: (newest-cni-670356) Calling .PreCreateCheck
	I0717 23:10:08.339958   59156 main.go:141] libmachine: (newest-cni-670356) Calling .GetConfigRaw
	I0717 23:10:08.340336   59156 main.go:141] libmachine: Creating machine...
	I0717 23:10:08.340350   59156 main.go:141] libmachine: (newest-cni-670356) Calling .Create
	I0717 23:10:08.340482   59156 main.go:141] libmachine: (newest-cni-670356) Creating KVM machine...
	I0717 23:10:08.341763   59156 main.go:141] libmachine: (newest-cni-670356) DBG | found existing default KVM network
	I0717 23:10:08.343019   59156 main.go:141] libmachine: (newest-cni-670356) DBG | I0717 23:10:08.342840   59179 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:6e:a7:3c} reservation:<nil>}
	I0717 23:10:08.344114   59156 main.go:141] libmachine: (newest-cni-670356) DBG | I0717 23:10:08.343987   59179 network.go:209] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a27e0}
	I0717 23:10:08.349338   59156 main.go:141] libmachine: (newest-cni-670356) DBG | trying to create private KVM network mk-newest-cni-670356 192.168.50.0/24...
	I0717 23:10:08.428151   59156 main.go:141] libmachine: (newest-cni-670356) DBG | private KVM network mk-newest-cni-670356 192.168.50.0/24 created
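The subnet scan at network.go:214/209 above skips 192.168.39.0/24 because it is already taken by an existing libvirt bridge and settles on 192.168.50.0/24. A rough sketch of that idea, assuming a fixed candidate list and using host interface addresses as the "taken" signal (freePrivateSubnet is a hypothetical name, not minikube's API):

package main

import (
	"fmt"
	"net"
)

// freePrivateSubnet walks a list of candidate private /24s and picks the
// first whose network address is not already bound to a host interface,
// i.e. not claimed by an existing libvirt network.
func freePrivateSubnet(candidates []string) (string, error) {
	taken := map[string]bool{}
	ifaces, err := net.Interfaces()
	if err != nil {
		return "", err
	}
	for _, ifc := range ifaces {
		addrs, err := ifc.Addrs()
		if err != nil {
			continue
		}
		for _, a := range addrs {
			if ipn, ok := a.(*net.IPNet); ok {
				taken[ipn.IP.Mask(ipn.Mask).String()] = true // record the network address
			}
		}
	}
	for _, c := range candidates {
		ip, _, err := net.ParseCIDR(c)
		if err != nil {
			return "", err
		}
		if !taken[ip.String()] {
			return c, nil
		}
	}
	return "", fmt.Errorf("no free subnet among %v", candidates)
}

func main() {
	subnet, err := freePrivateSubnet([]string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"})
	fmt.Println(subnet, err)
}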
	I0717 23:10:08.428186   59156 main.go:141] libmachine: (newest-cni-670356) Setting up store path in /home/jenkins/minikube-integration/16899-15759/.minikube/machines/newest-cni-670356 ...
	I0717 23:10:08.428204   59156 main.go:141] libmachine: (newest-cni-670356) DBG | I0717 23:10:08.427971   59179 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/16899-15759/.minikube
	I0717 23:10:08.428227   59156 main.go:141] libmachine: (newest-cni-670356) Building disk image from file:///home/jenkins/minikube-integration/16899-15759/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso
	I0717 23:10:08.428253   59156 main.go:141] libmachine: (newest-cni-670356) Downloading /home/jenkins/minikube-integration/16899-15759/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16899-15759/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso...
	I0717 23:10:08.648490   59156 main.go:141] libmachine: (newest-cni-670356) DBG | I0717 23:10:08.648315   59179 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/newest-cni-670356/id_rsa...
	I0717 23:10:08.948214   59156 main.go:141] libmachine: (newest-cni-670356) DBG | I0717 23:10:08.948082   59179 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/newest-cni-670356/newest-cni-670356.rawdisk...
	I0717 23:10:08.948269   59156 main.go:141] libmachine: (newest-cni-670356) DBG | Writing magic tar header
	I0717 23:10:08.948288   59156 main.go:141] libmachine: (newest-cni-670356) DBG | Writing SSH key tar header
	I0717 23:10:08.948308   59156 main.go:141] libmachine: (newest-cni-670356) DBG | I0717 23:10:08.948212   59179 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/16899-15759/.minikube/machines/newest-cni-670356 ...
	I0717 23:10:08.948387   59156 main.go:141] libmachine: (newest-cni-670356) Setting executable bit set on /home/jenkins/minikube-integration/16899-15759/.minikube/machines/newest-cni-670356 (perms=drwx------)
	I0717 23:10:08.948407   59156 main.go:141] libmachine: (newest-cni-670356) Setting executable bit set on /home/jenkins/minikube-integration/16899-15759/.minikube/machines (perms=drwxr-xr-x)
	I0717 23:10:08.948421   59156 main.go:141] libmachine: (newest-cni-670356) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/newest-cni-670356
	I0717 23:10:08.948455   59156 main.go:141] libmachine: (newest-cni-670356) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16899-15759/.minikube/machines
	I0717 23:10:08.948465   59156 main.go:141] libmachine: (newest-cni-670356) Setting executable bit set on /home/jenkins/minikube-integration/16899-15759/.minikube (perms=drwxr-xr-x)
	I0717 23:10:08.948475   59156 main.go:141] libmachine: (newest-cni-670356) Setting executable bit set on /home/jenkins/minikube-integration/16899-15759 (perms=drwxrwxr-x)
	I0717 23:10:08.948481   59156 main.go:141] libmachine: (newest-cni-670356) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 23:10:08.948490   59156 main.go:141] libmachine: (newest-cni-670356) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 23:10:08.948502   59156 main.go:141] libmachine: (newest-cni-670356) Creating domain...
	I0717 23:10:08.948522   59156 main.go:141] libmachine: (newest-cni-670356) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16899-15759/.minikube
	I0717 23:10:08.948541   59156 main.go:141] libmachine: (newest-cni-670356) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16899-15759
	I0717 23:10:08.948553   59156 main.go:141] libmachine: (newest-cni-670356) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 23:10:08.948562   59156 main.go:141] libmachine: (newest-cni-670356) DBG | Checking permissions on dir: /home/jenkins
	I0717 23:10:08.948575   59156 main.go:141] libmachine: (newest-cni-670356) DBG | Checking permissions on dir: /home
	I0717 23:10:08.948594   59156 main.go:141] libmachine: (newest-cni-670356) DBG | Skipping /home - not owner
	I0717 23:10:08.949790   59156 main.go:141] libmachine: (newest-cni-670356) define libvirt domain using xml: 
	I0717 23:10:08.949811   59156 main.go:141] libmachine: (newest-cni-670356) <domain type='kvm'>
	I0717 23:10:08.949822   59156 main.go:141] libmachine: (newest-cni-670356)   <name>newest-cni-670356</name>
	I0717 23:10:08.949834   59156 main.go:141] libmachine: (newest-cni-670356)   <memory unit='MiB'>2200</memory>
	I0717 23:10:08.949854   59156 main.go:141] libmachine: (newest-cni-670356)   <vcpu>2</vcpu>
	I0717 23:10:08.949865   59156 main.go:141] libmachine: (newest-cni-670356)   <features>
	I0717 23:10:08.949874   59156 main.go:141] libmachine: (newest-cni-670356)     <acpi/>
	I0717 23:10:08.949882   59156 main.go:141] libmachine: (newest-cni-670356)     <apic/>
	I0717 23:10:08.949890   59156 main.go:141] libmachine: (newest-cni-670356)     <pae/>
	I0717 23:10:08.949895   59156 main.go:141] libmachine: (newest-cni-670356)     
	I0717 23:10:08.949909   59156 main.go:141] libmachine: (newest-cni-670356)   </features>
	I0717 23:10:08.949924   59156 main.go:141] libmachine: (newest-cni-670356)   <cpu mode='host-passthrough'>
	I0717 23:10:08.949944   59156 main.go:141] libmachine: (newest-cni-670356)   
	I0717 23:10:08.949956   59156 main.go:141] libmachine: (newest-cni-670356)   </cpu>
	I0717 23:10:08.949969   59156 main.go:141] libmachine: (newest-cni-670356)   <os>
	I0717 23:10:08.949977   59156 main.go:141] libmachine: (newest-cni-670356)     <type>hvm</type>
	I0717 23:10:08.949985   59156 main.go:141] libmachine: (newest-cni-670356)     <boot dev='cdrom'/>
	I0717 23:10:08.949993   59156 main.go:141] libmachine: (newest-cni-670356)     <boot dev='hd'/>
	I0717 23:10:08.950019   59156 main.go:141] libmachine: (newest-cni-670356)     <bootmenu enable='no'/>
	I0717 23:10:08.950043   59156 main.go:141] libmachine: (newest-cni-670356)   </os>
	I0717 23:10:08.950056   59156 main.go:141] libmachine: (newest-cni-670356)   <devices>
	I0717 23:10:08.950069   59156 main.go:141] libmachine: (newest-cni-670356)     <disk type='file' device='cdrom'>
	I0717 23:10:08.950096   59156 main.go:141] libmachine: (newest-cni-670356)       <source file='/home/jenkins/minikube-integration/16899-15759/.minikube/machines/newest-cni-670356/boot2docker.iso'/>
	I0717 23:10:08.950113   59156 main.go:141] libmachine: (newest-cni-670356)       <target dev='hdc' bus='scsi'/>
	I0717 23:10:08.950123   59156 main.go:141] libmachine: (newest-cni-670356)       <readonly/>
	I0717 23:10:08.950131   59156 main.go:141] libmachine: (newest-cni-670356)     </disk>
	I0717 23:10:08.950149   59156 main.go:141] libmachine: (newest-cni-670356)     <disk type='file' device='disk'>
	I0717 23:10:08.950161   59156 main.go:141] libmachine: (newest-cni-670356)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 23:10:08.950173   59156 main.go:141] libmachine: (newest-cni-670356)       <source file='/home/jenkins/minikube-integration/16899-15759/.minikube/machines/newest-cni-670356/newest-cni-670356.rawdisk'/>
	I0717 23:10:08.950181   59156 main.go:141] libmachine: (newest-cni-670356)       <target dev='hda' bus='virtio'/>
	I0717 23:10:08.950189   59156 main.go:141] libmachine: (newest-cni-670356)     </disk>
	I0717 23:10:08.950200   59156 main.go:141] libmachine: (newest-cni-670356)     <interface type='network'>
	I0717 23:10:08.950225   59156 main.go:141] libmachine: (newest-cni-670356)       <source network='mk-newest-cni-670356'/>
	I0717 23:10:08.950246   59156 main.go:141] libmachine: (newest-cni-670356)       <model type='virtio'/>
	I0717 23:10:08.950259   59156 main.go:141] libmachine: (newest-cni-670356)     </interface>
	I0717 23:10:08.950272   59156 main.go:141] libmachine: (newest-cni-670356)     <interface type='network'>
	I0717 23:10:08.950284   59156 main.go:141] libmachine: (newest-cni-670356)       <source network='default'/>
	I0717 23:10:08.950294   59156 main.go:141] libmachine: (newest-cni-670356)       <model type='virtio'/>
	I0717 23:10:08.950307   59156 main.go:141] libmachine: (newest-cni-670356)     </interface>
	I0717 23:10:08.950324   59156 main.go:141] libmachine: (newest-cni-670356)     <serial type='pty'>
	I0717 23:10:08.950337   59156 main.go:141] libmachine: (newest-cni-670356)       <target port='0'/>
	I0717 23:10:08.950349   59156 main.go:141] libmachine: (newest-cni-670356)     </serial>
	I0717 23:10:08.950367   59156 main.go:141] libmachine: (newest-cni-670356)     <console type='pty'>
	I0717 23:10:08.950376   59156 main.go:141] libmachine: (newest-cni-670356)       <target type='serial' port='0'/>
	I0717 23:10:08.950393   59156 main.go:141] libmachine: (newest-cni-670356)     </console>
	I0717 23:10:08.950412   59156 main.go:141] libmachine: (newest-cni-670356)     <rng model='virtio'>
	I0717 23:10:08.950429   59156 main.go:141] libmachine: (newest-cni-670356)       <backend model='random'>/dev/random</backend>
	I0717 23:10:08.950437   59156 main.go:141] libmachine: (newest-cni-670356)     </rng>
	I0717 23:10:08.950443   59156 main.go:141] libmachine: (newest-cni-670356)     
	I0717 23:10:08.950452   59156 main.go:141] libmachine: (newest-cni-670356)     
	I0717 23:10:08.950461   59156 main.go:141] libmachine: (newest-cni-670356)   </devices>
	I0717 23:10:08.950468   59156 main.go:141] libmachine: (newest-cni-670356) </domain>
	I0717 23:10:08.950485   59156 main.go:141] libmachine: (newest-cni-670356) 
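The XML dumped above is what the kvm2 driver hands to libvirt. Doing the same step by hand would be roughly "virsh define <file.xml>" followed by "virsh start newest-cni-670356"; the driver performs the equivalent calls through the libvirt API in the "Creating domain..." step that follows.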
	I0717 23:10:08.954676   59156 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:b4:fe:81 in network default
	I0717 23:10:08.955264   59156 main.go:141] libmachine: (newest-cni-670356) Ensuring networks are active...
	I0717 23:10:08.955288   59156 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:10:08.956027   59156 main.go:141] libmachine: (newest-cni-670356) Ensuring network default is active
	I0717 23:10:08.956460   59156 main.go:141] libmachine: (newest-cni-670356) Ensuring network mk-newest-cni-670356 is active
	I0717 23:10:08.956976   59156 main.go:141] libmachine: (newest-cni-670356) Getting domain xml...
	I0717 23:10:08.957652   59156 main.go:141] libmachine: (newest-cni-670356) Creating domain...
	I0717 23:10:09.343174   59156 main.go:141] libmachine: (newest-cni-670356) Waiting to get IP...
	I0717 23:10:09.344179   59156 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:10:09.344591   59156 main.go:141] libmachine: (newest-cni-670356) DBG | unable to find current IP address of domain newest-cni-670356 in network mk-newest-cni-670356
	I0717 23:10:09.344621   59156 main.go:141] libmachine: (newest-cni-670356) DBG | I0717 23:10:09.344577   59179 retry.go:31] will retry after 229.720752ms: waiting for machine to come up
	I0717 23:10:09.576138   59156 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:10:09.576793   59156 main.go:141] libmachine: (newest-cni-670356) DBG | unable to find current IP address of domain newest-cni-670356 in network mk-newest-cni-670356
	I0717 23:10:09.576825   59156 main.go:141] libmachine: (newest-cni-670356) DBG | I0717 23:10:09.576749   59179 retry.go:31] will retry after 330.127539ms: waiting for machine to come up
	I0717 23:10:09.908921   59156 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:10:09.909424   59156 main.go:141] libmachine: (newest-cni-670356) DBG | unable to find current IP address of domain newest-cni-670356 in network mk-newest-cni-670356
	I0717 23:10:09.909465   59156 main.go:141] libmachine: (newest-cni-670356) DBG | I0717 23:10:09.909408   59179 retry.go:31] will retry after 367.546946ms: waiting for machine to come up
	I0717 23:10:10.278956   59156 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:10:10.279421   59156 main.go:141] libmachine: (newest-cni-670356) DBG | unable to find current IP address of domain newest-cni-670356 in network mk-newest-cni-670356
	I0717 23:10:10.279454   59156 main.go:141] libmachine: (newest-cni-670356) DBG | I0717 23:10:10.279373   59179 retry.go:31] will retry after 436.060188ms: waiting for machine to come up
	I0717 23:10:10.716880   59156 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:10:10.717484   59156 main.go:141] libmachine: (newest-cni-670356) DBG | unable to find current IP address of domain newest-cni-670356 in network mk-newest-cni-670356
	I0717 23:10:10.717504   59156 main.go:141] libmachine: (newest-cni-670356) DBG | I0717 23:10:10.717433   59179 retry.go:31] will retry after 705.925308ms: waiting for machine to come up
	I0717 23:10:11.425466   59156 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:10:11.426063   59156 main.go:141] libmachine: (newest-cni-670356) DBG | unable to find current IP address of domain newest-cni-670356 in network mk-newest-cni-670356
	I0717 23:10:11.426113   59156 main.go:141] libmachine: (newest-cni-670356) DBG | I0717 23:10:11.425985   59179 retry.go:31] will retry after 600.60635ms: waiting for machine to come up
	I0717 23:10:12.028585   59156 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:10:12.029039   59156 main.go:141] libmachine: (newest-cni-670356) DBG | unable to find current IP address of domain newest-cni-670356 in network mk-newest-cni-670356
	I0717 23:10:12.029067   59156 main.go:141] libmachine: (newest-cni-670356) DBG | I0717 23:10:12.028973   59179 retry.go:31] will retry after 892.988329ms: waiting for machine to come up
	I0717 23:10:12.923898   59156 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:10:12.924370   59156 main.go:141] libmachine: (newest-cni-670356) DBG | unable to find current IP address of domain newest-cni-670356 in network mk-newest-cni-670356
	I0717 23:10:12.924395   59156 main.go:141] libmachine: (newest-cni-670356) DBG | I0717 23:10:12.924334   59179 retry.go:31] will retry after 1.024160485s: waiting for machine to come up
	I0717 23:10:13.949710   59156 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:10:13.950139   59156 main.go:141] libmachine: (newest-cni-670356) DBG | unable to find current IP address of domain newest-cni-670356 in network mk-newest-cni-670356
	I0717 23:10:13.950157   59156 main.go:141] libmachine: (newest-cni-670356) DBG | I0717 23:10:13.950105   59179 retry.go:31] will retry after 1.564113796s: waiting for machine to come up
	I0717 23:10:15.516814   59156 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:10:15.517273   59156 main.go:141] libmachine: (newest-cni-670356) DBG | unable to find current IP address of domain newest-cni-670356 in network mk-newest-cni-670356
	I0717 23:10:15.517315   59156 main.go:141] libmachine: (newest-cni-670356) DBG | I0717 23:10:15.517217   59179 retry.go:31] will retry after 1.717370162s: waiting for machine to come up
	I0717 23:10:17.237280   59156 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:10:17.237811   59156 main.go:141] libmachine: (newest-cni-670356) DBG | unable to find current IP address of domain newest-cni-670356 in network mk-newest-cni-670356
	I0717 23:10:17.237842   59156 main.go:141] libmachine: (newest-cni-670356) DBG | I0717 23:10:17.237765   59179 retry.go:31] will retry after 2.165302742s: waiting for machine to come up
	I0717 23:10:19.404604   59156 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:10:19.405118   59156 main.go:141] libmachine: (newest-cni-670356) DBG | unable to find current IP address of domain newest-cni-670356 in network mk-newest-cni-670356
	I0717 23:10:19.405171   59156 main.go:141] libmachine: (newest-cni-670356) DBG | I0717 23:10:19.405073   59179 retry.go:31] will retry after 3.361902421s: waiting for machine to come up
	I0717 23:10:22.768508   59156 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:10:22.768988   59156 main.go:141] libmachine: (newest-cni-670356) DBG | unable to find current IP address of domain newest-cni-670356 in network mk-newest-cni-670356
	I0717 23:10:22.769013   59156 main.go:141] libmachine: (newest-cni-670356) DBG | I0717 23:10:22.768943   59179 retry.go:31] will retry after 3.941807109s: waiting for machine to come up
	I0717 23:10:26.714366   59156 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:10:26.714938   59156 main.go:141] libmachine: (newest-cni-670356) DBG | unable to find current IP address of domain newest-cni-670356 in network mk-newest-cni-670356
	I0717 23:10:26.714966   59156 main.go:141] libmachine: (newest-cni-670356) DBG | I0717 23:10:26.714860   59179 retry.go:31] will retry after 4.581932302s: waiting for machine to come up
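The retry.go:31 entries above show the wait-for-IP loop backing off with growing, jittered delays until the new VM obtains a DHCP lease. The following is a self-contained sketch of that pattern under assumed timings; waitForIP and its lookup callback are illustrative, not minikube's actual retry implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls a lookup function with delays that grow (with some jitter)
// until an address appears or the deadline passes, mirroring the intervals
// seen in the retry.go log lines above.
func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
	start := time.Now()
	delay := 200 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		// jittered, roughly geometric backoff
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("no lease yet") // simulated: no DHCP lease on early polls
		}
		return "192.168.50.10", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}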
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-07-17 22:51:06 UTC, ends at Mon 2023-07-17 23:10:32 UTC. --
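The debug entries below show CRI-O on default-k8s-diff-port-504828 serving ListContainers RPCs over its CRI socket; the same listing can be reproduced by hand with "sudo crictl ps -a" on that node, which issues the same RPC.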
	Jul 17 23:10:31 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:10:31.567318559Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6,PodSandboxId:4eacdea27ce5a7ae9042ae2daa9f8d25e3e2d3daf10783ef9e8790c0b763c11d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634615489949724,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b840b23-0b6f-4ed1-9ae5-d96311b3b9f1,},Annotations:map[string]string{io.kubernetes.container.hash: d7492b86,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223,PodSandboxId:95235fcef3c171b2f22860f4a8b5d6ba7a237ff1e11632960d02df56d45cd22e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689634614300006420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-rqcjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f3bc4cf-fb20-413e-b367-27bcb997ab80,},Annotations:map[string]string{io.kubernetes.container.hash: 13018200,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPor
t\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524,PodSandboxId:3c32ee122426654c167d57395683946f1113e907775466fe6a8588dde9617912,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689634612047436890,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmtc8,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 1f8a0182-d1df-4609-86d1-7695a138e32f,},Annotations:map[string]string{io.kubernetes.container.hash: 814b84bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab,PodSandboxId:380099398de21ffca07cb840c91d32555a4f88f84b60395fdccc8a384bf7db1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689634589624955671,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcb438e83409f958b
acca207d1e163b7,},Annotations:map[string]string{io.kubernetes.container.hash: f860049,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad,PodSandboxId:fc683fbf695faa6e4f0f78ddef9635955ce1070e4d8ffe6461110e907e8c94f6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689634589388737757,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c582554d88f95b3e93
88f572e4b1d141,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0,PodSandboxId:dcc733eb7ced41fe33d429df849257fbff00840c78790efdaf5106779c9e59a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689634589131405003,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a62d210fd6117c9b32
e321081bbd5097,},Annotations:map[string]string{io.kubernetes.container.hash: 6a2a0900,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10,PodSandboxId:5456e993c07cf169db6fdcc6dd7ec81585827045bcbb22aa2aad5005b777c698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689634588860913827,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 06fd590960bf1556b243af4f4ad60e82,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3fb248d0-8241-4a57-a151-8aafc73bd30e name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 23:10:32 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:10:32.243698494Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ed0ca8e4-8c4e-41a4-9b47-5439f07a32f4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:32 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:10:32.243766655Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ed0ca8e4-8c4e-41a4-9b47-5439f07a32f4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:32 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:10:32.243986549Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6,PodSandboxId:4eacdea27ce5a7ae9042ae2daa9f8d25e3e2d3daf10783ef9e8790c0b763c11d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634615489949724,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b840b23-0b6f-4ed1-9ae5-d96311b3b9f1,},Annotations:map[string]string{io.kubernetes.container.hash: d7492b86,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223,PodSandboxId:95235fcef3c171b2f22860f4a8b5d6ba7a237ff1e11632960d02df56d45cd22e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689634614300006420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-rqcjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f3bc4cf-fb20-413e-b367-27bcb997ab80,},Annotations:map[string]string{io.kubernetes.container.hash: 13018200,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPor
t\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524,PodSandboxId:3c32ee122426654c167d57395683946f1113e907775466fe6a8588dde9617912,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689634612047436890,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmtc8,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 1f8a0182-d1df-4609-86d1-7695a138e32f,},Annotations:map[string]string{io.kubernetes.container.hash: 814b84bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab,PodSandboxId:380099398de21ffca07cb840c91d32555a4f88f84b60395fdccc8a384bf7db1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689634589624955671,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcb438e83409f958b
acca207d1e163b7,},Annotations:map[string]string{io.kubernetes.container.hash: f860049,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad,PodSandboxId:fc683fbf695faa6e4f0f78ddef9635955ce1070e4d8ffe6461110e907e8c94f6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689634589388737757,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c582554d88f95b3e93
88f572e4b1d141,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0,PodSandboxId:dcc733eb7ced41fe33d429df849257fbff00840c78790efdaf5106779c9e59a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689634589131405003,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a62d210fd6117c9b32
e321081bbd5097,},Annotations:map[string]string{io.kubernetes.container.hash: 6a2a0900,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10,PodSandboxId:5456e993c07cf169db6fdcc6dd7ec81585827045bcbb22aa2aad5005b777c698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689634588860913827,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 06fd590960bf1556b243af4f4ad60e82,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ed0ca8e4-8c4e-41a4-9b47-5439f07a32f4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:32 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:10:32.289315947Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=506c74cd-9d86-4580-af60-af46f4a44c41 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:32 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:10:32.289419267Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=506c74cd-9d86-4580-af60-af46f4a44c41 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:32 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:10:32.289590276Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6,PodSandboxId:4eacdea27ce5a7ae9042ae2daa9f8d25e3e2d3daf10783ef9e8790c0b763c11d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634615489949724,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b840b23-0b6f-4ed1-9ae5-d96311b3b9f1,},Annotations:map[string]string{io.kubernetes.container.hash: d7492b86,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223,PodSandboxId:95235fcef3c171b2f22860f4a8b5d6ba7a237ff1e11632960d02df56d45cd22e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689634614300006420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-rqcjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f3bc4cf-fb20-413e-b367-27bcb997ab80,},Annotations:map[string]string{io.kubernetes.container.hash: 13018200,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPor
t\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524,PodSandboxId:3c32ee122426654c167d57395683946f1113e907775466fe6a8588dde9617912,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689634612047436890,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmtc8,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 1f8a0182-d1df-4609-86d1-7695a138e32f,},Annotations:map[string]string{io.kubernetes.container.hash: 814b84bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab,PodSandboxId:380099398de21ffca07cb840c91d32555a4f88f84b60395fdccc8a384bf7db1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689634589624955671,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcb438e83409f958b
acca207d1e163b7,},Annotations:map[string]string{io.kubernetes.container.hash: f860049,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad,PodSandboxId:fc683fbf695faa6e4f0f78ddef9635955ce1070e4d8ffe6461110e907e8c94f6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689634589388737757,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c582554d88f95b3e93
88f572e4b1d141,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0,PodSandboxId:dcc733eb7ced41fe33d429df849257fbff00840c78790efdaf5106779c9e59a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689634589131405003,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a62d210fd6117c9b32
e321081bbd5097,},Annotations:map[string]string{io.kubernetes.container.hash: 6a2a0900,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10,PodSandboxId:5456e993c07cf169db6fdcc6dd7ec81585827045bcbb22aa2aad5005b777c698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689634588860913827,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 06fd590960bf1556b243af4f4ad60e82,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=506c74cd-9d86-4580-af60-af46f4a44c41 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:32 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:10:32.323917500Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=84597fbb-8b8f-4a8f-b787-88650d687f5b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:32 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:10:32.324010619Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=84597fbb-8b8f-4a8f-b787-88650d687f5b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:32 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:10:32.324239318Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6,PodSandboxId:4eacdea27ce5a7ae9042ae2daa9f8d25e3e2d3daf10783ef9e8790c0b763c11d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634615489949724,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b840b23-0b6f-4ed1-9ae5-d96311b3b9f1,},Annotations:map[string]string{io.kubernetes.container.hash: d7492b86,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223,PodSandboxId:95235fcef3c171b2f22860f4a8b5d6ba7a237ff1e11632960d02df56d45cd22e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689634614300006420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-rqcjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f3bc4cf-fb20-413e-b367-27bcb997ab80,},Annotations:map[string]string{io.kubernetes.container.hash: 13018200,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPor
t\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524,PodSandboxId:3c32ee122426654c167d57395683946f1113e907775466fe6a8588dde9617912,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689634612047436890,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmtc8,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 1f8a0182-d1df-4609-86d1-7695a138e32f,},Annotations:map[string]string{io.kubernetes.container.hash: 814b84bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab,PodSandboxId:380099398de21ffca07cb840c91d32555a4f88f84b60395fdccc8a384bf7db1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689634589624955671,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcb438e83409f958b
acca207d1e163b7,},Annotations:map[string]string{io.kubernetes.container.hash: f860049,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad,PodSandboxId:fc683fbf695faa6e4f0f78ddef9635955ce1070e4d8ffe6461110e907e8c94f6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689634589388737757,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c582554d88f95b3e93
88f572e4b1d141,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0,PodSandboxId:dcc733eb7ced41fe33d429df849257fbff00840c78790efdaf5106779c9e59a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689634589131405003,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a62d210fd6117c9b32
e321081bbd5097,},Annotations:map[string]string{io.kubernetes.container.hash: 6a2a0900,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10,PodSandboxId:5456e993c07cf169db6fdcc6dd7ec81585827045bcbb22aa2aad5005b777c698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689634588860913827,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 06fd590960bf1556b243af4f4ad60e82,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=84597fbb-8b8f-4a8f-b787-88650d687f5b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:32 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:10:32.358587482Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=16965658-7618-499d-842f-351f81d08e21 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:32 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:10:32.358672601Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=16965658-7618-499d-842f-351f81d08e21 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:32 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:10:32.358833540Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6,PodSandboxId:4eacdea27ce5a7ae9042ae2daa9f8d25e3e2d3daf10783ef9e8790c0b763c11d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634615489949724,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b840b23-0b6f-4ed1-9ae5-d96311b3b9f1,},Annotations:map[string]string{io.kubernetes.container.hash: d7492b86,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223,PodSandboxId:95235fcef3c171b2f22860f4a8b5d6ba7a237ff1e11632960d02df56d45cd22e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689634614300006420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-rqcjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f3bc4cf-fb20-413e-b367-27bcb997ab80,},Annotations:map[string]string{io.kubernetes.container.hash: 13018200,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPor
t\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524,PodSandboxId:3c32ee122426654c167d57395683946f1113e907775466fe6a8588dde9617912,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689634612047436890,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmtc8,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 1f8a0182-d1df-4609-86d1-7695a138e32f,},Annotations:map[string]string{io.kubernetes.container.hash: 814b84bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab,PodSandboxId:380099398de21ffca07cb840c91d32555a4f88f84b60395fdccc8a384bf7db1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689634589624955671,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcb438e83409f958b
acca207d1e163b7,},Annotations:map[string]string{io.kubernetes.container.hash: f860049,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad,PodSandboxId:fc683fbf695faa6e4f0f78ddef9635955ce1070e4d8ffe6461110e907e8c94f6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689634589388737757,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c582554d88f95b3e93
88f572e4b1d141,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0,PodSandboxId:dcc733eb7ced41fe33d429df849257fbff00840c78790efdaf5106779c9e59a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689634589131405003,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a62d210fd6117c9b32
e321081bbd5097,},Annotations:map[string]string{io.kubernetes.container.hash: 6a2a0900,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10,PodSandboxId:5456e993c07cf169db6fdcc6dd7ec81585827045bcbb22aa2aad5005b777c698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689634588860913827,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 06fd590960bf1556b243af4f4ad60e82,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=16965658-7618-499d-842f-351f81d08e21 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:32 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:10:32.397856523Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cf00ffe6-025f-47c2-9154-ad0f0a5aa174 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:32 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:10:32.397956172Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cf00ffe6-025f-47c2-9154-ad0f0a5aa174 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:32 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:10:32.398110970Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6,PodSandboxId:4eacdea27ce5a7ae9042ae2daa9f8d25e3e2d3daf10783ef9e8790c0b763c11d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634615489949724,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b840b23-0b6f-4ed1-9ae5-d96311b3b9f1,},Annotations:map[string]string{io.kubernetes.container.hash: d7492b86,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223,PodSandboxId:95235fcef3c171b2f22860f4a8b5d6ba7a237ff1e11632960d02df56d45cd22e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689634614300006420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-rqcjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f3bc4cf-fb20-413e-b367-27bcb997ab80,},Annotations:map[string]string{io.kubernetes.container.hash: 13018200,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPor
t\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524,PodSandboxId:3c32ee122426654c167d57395683946f1113e907775466fe6a8588dde9617912,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689634612047436890,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmtc8,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 1f8a0182-d1df-4609-86d1-7695a138e32f,},Annotations:map[string]string{io.kubernetes.container.hash: 814b84bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab,PodSandboxId:380099398de21ffca07cb840c91d32555a4f88f84b60395fdccc8a384bf7db1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689634589624955671,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcb438e83409f958b
acca207d1e163b7,},Annotations:map[string]string{io.kubernetes.container.hash: f860049,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad,PodSandboxId:fc683fbf695faa6e4f0f78ddef9635955ce1070e4d8ffe6461110e907e8c94f6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689634589388737757,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c582554d88f95b3e93
88f572e4b1d141,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0,PodSandboxId:dcc733eb7ced41fe33d429df849257fbff00840c78790efdaf5106779c9e59a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689634589131405003,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a62d210fd6117c9b32
e321081bbd5097,},Annotations:map[string]string{io.kubernetes.container.hash: 6a2a0900,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10,PodSandboxId:5456e993c07cf169db6fdcc6dd7ec81585827045bcbb22aa2aad5005b777c698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689634588860913827,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 06fd590960bf1556b243af4f4ad60e82,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cf00ffe6-025f-47c2-9154-ad0f0a5aa174 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:32 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:10:32.434778862Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=20ddcb90-805f-44ef-a1ec-d3b20814b069 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:32 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:10:32.434866814Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=20ddcb90-805f-44ef-a1ec-d3b20814b069 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:32 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:10:32.435043552Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6,PodSandboxId:4eacdea27ce5a7ae9042ae2daa9f8d25e3e2d3daf10783ef9e8790c0b763c11d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634615489949724,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b840b23-0b6f-4ed1-9ae5-d96311b3b9f1,},Annotations:map[string]string{io.kubernetes.container.hash: d7492b86,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223,PodSandboxId:95235fcef3c171b2f22860f4a8b5d6ba7a237ff1e11632960d02df56d45cd22e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689634614300006420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-rqcjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f3bc4cf-fb20-413e-b367-27bcb997ab80,},Annotations:map[string]string{io.kubernetes.container.hash: 13018200,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPor
t\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524,PodSandboxId:3c32ee122426654c167d57395683946f1113e907775466fe6a8588dde9617912,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689634612047436890,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmtc8,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 1f8a0182-d1df-4609-86d1-7695a138e32f,},Annotations:map[string]string{io.kubernetes.container.hash: 814b84bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab,PodSandboxId:380099398de21ffca07cb840c91d32555a4f88f84b60395fdccc8a384bf7db1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689634589624955671,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcb438e83409f958b
acca207d1e163b7,},Annotations:map[string]string{io.kubernetes.container.hash: f860049,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad,PodSandboxId:fc683fbf695faa6e4f0f78ddef9635955ce1070e4d8ffe6461110e907e8c94f6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689634589388737757,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c582554d88f95b3e93
88f572e4b1d141,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0,PodSandboxId:dcc733eb7ced41fe33d429df849257fbff00840c78790efdaf5106779c9e59a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689634589131405003,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a62d210fd6117c9b32
e321081bbd5097,},Annotations:map[string]string{io.kubernetes.container.hash: 6a2a0900,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10,PodSandboxId:5456e993c07cf169db6fdcc6dd7ec81585827045bcbb22aa2aad5005b777c698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689634588860913827,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 06fd590960bf1556b243af4f4ad60e82,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=20ddcb90-805f-44ef-a1ec-d3b20814b069 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:32 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:10:32.473727850Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4e3db2e4-8210-4016-bc0d-ede11f29ec02 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:32 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:10:32.473830244Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4e3db2e4-8210-4016-bc0d-ede11f29ec02 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:32 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:10:32.474082752Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6,PodSandboxId:4eacdea27ce5a7ae9042ae2daa9f8d25e3e2d3daf10783ef9e8790c0b763c11d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634615489949724,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b840b23-0b6f-4ed1-9ae5-d96311b3b9f1,},Annotations:map[string]string{io.kubernetes.container.hash: d7492b86,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223,PodSandboxId:95235fcef3c171b2f22860f4a8b5d6ba7a237ff1e11632960d02df56d45cd22e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689634614300006420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-rqcjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f3bc4cf-fb20-413e-b367-27bcb997ab80,},Annotations:map[string]string{io.kubernetes.container.hash: 13018200,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPor
t\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524,PodSandboxId:3c32ee122426654c167d57395683946f1113e907775466fe6a8588dde9617912,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689634612047436890,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmtc8,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 1f8a0182-d1df-4609-86d1-7695a138e32f,},Annotations:map[string]string{io.kubernetes.container.hash: 814b84bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab,PodSandboxId:380099398de21ffca07cb840c91d32555a4f88f84b60395fdccc8a384bf7db1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689634589624955671,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcb438e83409f958b
acca207d1e163b7,},Annotations:map[string]string{io.kubernetes.container.hash: f860049,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad,PodSandboxId:fc683fbf695faa6e4f0f78ddef9635955ce1070e4d8ffe6461110e907e8c94f6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689634589388737757,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c582554d88f95b3e93
88f572e4b1d141,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0,PodSandboxId:dcc733eb7ced41fe33d429df849257fbff00840c78790efdaf5106779c9e59a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689634589131405003,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a62d210fd6117c9b32
e321081bbd5097,},Annotations:map[string]string{io.kubernetes.container.hash: 6a2a0900,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10,PodSandboxId:5456e993c07cf169db6fdcc6dd7ec81585827045bcbb22aa2aad5005b777c698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689634588860913827,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 06fd590960bf1556b243af4f4ad60e82,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4e3db2e4-8210-4016-bc0d-ede11f29ec02 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:32 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:10:32.507547635Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e8583b49-a142-4ebc-b505-d9ccd974d131 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:32 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:10:32.507640969Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e8583b49-a142-4ebc-b505-d9ccd974d131 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:32 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:10:32.507897158Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6,PodSandboxId:4eacdea27ce5a7ae9042ae2daa9f8d25e3e2d3daf10783ef9e8790c0b763c11d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634615489949724,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b840b23-0b6f-4ed1-9ae5-d96311b3b9f1,},Annotations:map[string]string{io.kubernetes.container.hash: d7492b86,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223,PodSandboxId:95235fcef3c171b2f22860f4a8b5d6ba7a237ff1e11632960d02df56d45cd22e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689634614300006420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-rqcjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f3bc4cf-fb20-413e-b367-27bcb997ab80,},Annotations:map[string]string{io.kubernetes.container.hash: 13018200,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPor
t\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524,PodSandboxId:3c32ee122426654c167d57395683946f1113e907775466fe6a8588dde9617912,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689634612047436890,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmtc8,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 1f8a0182-d1df-4609-86d1-7695a138e32f,},Annotations:map[string]string{io.kubernetes.container.hash: 814b84bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab,PodSandboxId:380099398de21ffca07cb840c91d32555a4f88f84b60395fdccc8a384bf7db1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689634589624955671,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcb438e83409f958b
acca207d1e163b7,},Annotations:map[string]string{io.kubernetes.container.hash: f860049,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad,PodSandboxId:fc683fbf695faa6e4f0f78ddef9635955ce1070e4d8ffe6461110e907e8c94f6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689634589388737757,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c582554d88f95b3e93
88f572e4b1d141,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0,PodSandboxId:dcc733eb7ced41fe33d429df849257fbff00840c78790efdaf5106779c9e59a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689634589131405003,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a62d210fd6117c9b32
e321081bbd5097,},Annotations:map[string]string{io.kubernetes.container.hash: 6a2a0900,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10,PodSandboxId:5456e993c07cf169db6fdcc6dd7ec81585827045bcbb22aa2aad5005b777c698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689634588860913827,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 06fd590960bf1556b243af4f4ad60e82,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e8583b49-a142-4ebc-b505-d9ccd974d131 name=/runtime.v1alpha2.RuntimeService/ListContainers
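	As a point of reference only (not part of the captured output), the CRI-O entries above come from the service journal on the guest; the same stream can normally be followed on a live profile over SSH, assuming the standard systemd unit name used by the minikube guest:
	    minikube -p default-k8s-diff-port-504828 ssh -- sudo journalctl -u crio -f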
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	4633e9baf3307       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   4eacdea27ce5a
	30afb33a6d03f       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   13 minutes ago      Running             coredns                   0                   95235fcef3c17
	a74d33cec1e84       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c   13 minutes ago      Running             kube-proxy                0                   3c32ee1224266
	7267626b74cd3       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681   14 minutes ago      Running             etcd                      2                   380099398de21
	4ea5728b3af9b       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a   14 minutes ago      Running             kube-scheduler            2                   fc683fbf695fa
	45949cc457a02       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a   14 minutes ago      Running             kube-apiserver            2                   dcc733eb7ced4
	7853c0ad23d63       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f   14 minutes ago      Running             kube-controller-manager   2                   5456e993c07cf
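	For reference (illustrative only, not part of the captured log), a container listing of the same shape can usually be reproduced inside the guest with crictl against the CRI-O socket noted in the node annotations below:
	    minikube -p default-k8s-diff-port-504828 ssh -- sudo crictl ps -a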
	
	* 
	* ==> coredns [30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	[INFO] Reloading complete
	[INFO] 127.0.0.1:51522 - 14800 "HINFO IN 351336808927452243.1519533743927132388. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.00752014s
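	A note for reproduction (illustrative only): the CoreDNS output above can normally be pulled straight from the pod named in the container listing, assuming the kubectl context carries the profile name as elsewhere in this report:
	    kubectl --context default-k8s-diff-port-504828 -n kube-system logs coredns-5d78c9869d-rqcjj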
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-504828
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-504828
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8
	                    minikube.k8s.io/name=default-k8s-diff-port-504828
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T22_56_37_0700
	                    minikube.k8s.io/version=v1.31.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 22:56:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-504828
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 23:10:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 23:07:11 +0000   Mon, 17 Jul 2023 22:56:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 23:07:11 +0000   Mon, 17 Jul 2023 22:56:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 23:07:11 +0000   Mon, 17 Jul 2023 22:56:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 23:07:11 +0000   Mon, 17 Jul 2023 22:56:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.118
	  Hostname:    default-k8s-diff-port-504828
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 968e7df5c4a84974bf4bfbd3b75f21df
	  System UUID:                968e7df5-c4a8-4974-bf4b-fbd3b75f21df
	  Boot ID:                    92ae26ce-42b7-4dfc-887b-53002c0c83b2
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-rqcjj                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-default-k8s-diff-port-504828                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-default-k8s-diff-port-504828             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-504828    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-nmtc8                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-default-k8s-diff-port-504828             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-74d5c6b9c-j8f2f                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node default-k8s-diff-port-504828 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node default-k8s-diff-port-504828 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node default-k8s-diff-port-504828 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             13m   kubelet          Node default-k8s-diff-port-504828 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                13m   kubelet          Node default-k8s-diff-port-504828 status is now: NodeReady
	  Normal  RegisteredNode           13m   node-controller  Node default-k8s-diff-port-504828 event: Registered Node default-k8s-diff-port-504828 in Controller
	
	* 
	* ==> dmesg <==
	* [Jul17 22:50] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.076290] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jul17 22:51] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.571021] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.164202] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.726842] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.651350] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.114891] systemd-fstab-generator[658]: Ignoring "noauto" for root device
	[  +0.194174] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.126073] systemd-fstab-generator[683]: Ignoring "noauto" for root device
	[  +0.235906] systemd-fstab-generator[707]: Ignoring "noauto" for root device
	[ +17.537823] systemd-fstab-generator[923]: Ignoring "noauto" for root device
	[ +16.404303] kauditd_printk_skb: 24 callbacks suppressed
	[Jul17 22:56] systemd-fstab-generator[3500]: Ignoring "noauto" for root device
	[  +9.810584] systemd-fstab-generator[3821]: Ignoring "noauto" for root device
	[ +23.725586] kauditd_printk_skb: 11 callbacks suppressed
	
	* 
	* ==> etcd [7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab] <==
	* {"level":"info","ts":"2023-07-17T22:56:31.367Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"adc6509a13463106","initial-advertise-peer-urls":["https://192.168.72.118:2380"],"listen-peer-urls":["https://192.168.72.118:2380"],"advertise-client-urls":["https://192.168.72.118:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.118:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-17T22:56:31.367Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-17T22:56:31.371Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.72.118:2380"}
	{"level":"info","ts":"2023-07-17T22:56:31.371Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.72.118:2380"}
	{"level":"info","ts":"2023-07-17T22:56:31.526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"adc6509a13463106 is starting a new election at term 1"}
	{"level":"info","ts":"2023-07-17T22:56:31.526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"adc6509a13463106 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-07-17T22:56:31.526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"adc6509a13463106 received MsgPreVoteResp from adc6509a13463106 at term 1"}
	{"level":"info","ts":"2023-07-17T22:56:31.526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"adc6509a13463106 became candidate at term 2"}
	{"level":"info","ts":"2023-07-17T22:56:31.526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"adc6509a13463106 received MsgVoteResp from adc6509a13463106 at term 2"}
	{"level":"info","ts":"2023-07-17T22:56:31.526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"adc6509a13463106 became leader at term 2"}
	{"level":"info","ts":"2023-07-17T22:56:31.526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: adc6509a13463106 elected leader adc6509a13463106 at term 2"}
	{"level":"info","ts":"2023-07-17T22:56:31.532Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T22:56:31.535Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"adc6509a13463106","local-member-attributes":"{Name:default-k8s-diff-port-504828 ClientURLs:[https://192.168.72.118:2379]}","request-path":"/0/members/adc6509a13463106/attributes","cluster-id":"fa04419eb9ff79c4","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-17T22:56:31.535Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T22:56:31.536Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.72.118:2379"}
	{"level":"info","ts":"2023-07-17T22:56:31.537Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T22:56:31.537Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-17T22:56:31.564Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-17T22:56:31.575Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-17T22:56:31.580Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa04419eb9ff79c4","local-member-id":"adc6509a13463106","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T22:56:31.580Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T22:56:31.580Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T23:06:31.600Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":688}
	{"level":"info","ts":"2023-07-17T23:06:31.603Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":688,"took":"2.540855ms","hash":3055155226}
	{"level":"info","ts":"2023-07-17T23:06:31.603Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3055155226,"revision":688,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  23:10:32 up 19 min,  0 users,  load average: 0.65, 0.31, 0.28
	Linux default-k8s-diff-port-504828 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0] <==
	* E0717 23:06:34.744895       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 23:06:34.744934       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0717 23:06:34.744991       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 23:06:34.746323       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 23:07:33.575878       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.111.159.95:443: connect: connection refused
	I0717 23:07:33.576278       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0717 23:07:34.746335       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 23:07:34.746400       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 23:07:34.746419       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 23:07:34.747429       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 23:07:34.747520       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 23:07:34.747550       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 23:08:33.575826       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.111.159.95:443: connect: connection refused
	I0717 23:08:33.576040       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0717 23:09:33.576860       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.111.159.95:443: connect: connection refused
	I0717 23:09:33.577106       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0717 23:09:34.746873       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 23:09:34.747048       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 23:09:34.747121       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 23:09:34.748021       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 23:09:34.748086       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 23:09:34.749197       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10] <==
	* W0717 23:04:20.459673       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:04:49.945551       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:04:50.468615       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:05:19.952667       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:05:20.478258       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:05:49.959998       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:05:50.486762       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:06:19.965003       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:06:20.499439       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:06:49.972042       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:06:50.507935       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:07:19.978257       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:07:20.517118       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:07:49.985867       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:07:50.526972       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:08:19.990868       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:08:20.536329       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:08:49.996767       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:08:50.544633       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:09:20.003079       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:09:20.554694       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:09:50.009832       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:09:50.564096       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:10:20.017540       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:10:20.574820       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	
	* 
	* ==> kube-proxy [a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524] <==
	* I0717 22:56:55.222387       1 node.go:141] Successfully retrieved node IP: 192.168.72.118
	I0717 22:56:55.222672       1 server_others.go:110] "Detected node IP" address="192.168.72.118"
	I0717 22:56:55.222763       1 server_others.go:554] "Using iptables proxy"
	I0717 22:56:55.312287       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0717 22:56:55.312396       1 server_others.go:192] "Using iptables Proxier"
	I0717 22:56:55.312533       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 22:56:55.313643       1 server.go:658] "Version info" version="v1.27.3"
	I0717 22:56:55.313813       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 22:56:55.318002       1 config.go:188] "Starting service config controller"
	I0717 22:56:55.318490       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 22:56:55.318761       1 config.go:97] "Starting endpoint slice config controller"
	I0717 22:56:55.319002       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 22:56:55.330515       1 config.go:315] "Starting node config controller"
	I0717 22:56:55.334251       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 22:56:55.418967       1 shared_informer.go:318] Caches are synced for service config
	I0717 22:56:55.419231       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0717 22:56:55.435766       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad] <==
	* W0717 22:56:33.792004       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 22:56:33.792074       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 22:56:33.792252       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 22:56:33.795248       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 22:56:33.795415       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 22:56:33.795535       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 22:56:33.796113       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 22:56:33.796241       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 22:56:33.796697       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 22:56:33.796897       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 22:56:33.798350       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 22:56:33.798611       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 22:56:34.626320       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 22:56:34.626449       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 22:56:34.646934       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 22:56:34.647017       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 22:56:34.691493       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 22:56:34.691544       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 22:56:34.753196       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 22:56:34.753252       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 22:56:34.857013       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 22:56:34.857246       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 22:56:35.263941       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 22:56:35.263998       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 22:56:37.365604       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-07-17 22:51:06 UTC, ends at Mon 2023-07-17 23:10:33 UTC. --
	Jul 17 23:07:37 default-k8s-diff-port-504828 kubelet[3828]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 23:07:49 default-k8s-diff-port-504828 kubelet[3828]: E0717 23:07:49.561418    3828 remote_image.go:167] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 17 23:07:49 default-k8s-diff-port-504828 kubelet[3828]: E0717 23:07:49.561521    3828 kuberuntime_image.go:53] "Failed to pull image" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 17 23:07:49 default-k8s-diff-port-504828 kubelet[3828]: E0717 23:07:49.561680    3828 kuberuntime_manager.go:1212] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-slbcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe
:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessa
gePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod metrics-server-74d5c6b9c-j8f2f_kube-system(328c892b-7402-480b-bc29-a316c8fb7b1f): ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 17 23:07:49 default-k8s-diff-port-504828 kubelet[3828]: E0717 23:07:49.561712    3828 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-74d5c6b9c-j8f2f" podUID=328c892b-7402-480b-bc29-a316c8fb7b1f
	Jul 17 23:08:00 default-k8s-diff-port-504828 kubelet[3828]: E0717 23:08:00.549069    3828 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-j8f2f" podUID=328c892b-7402-480b-bc29-a316c8fb7b1f
	Jul 17 23:08:15 default-k8s-diff-port-504828 kubelet[3828]: E0717 23:08:15.550251    3828 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-j8f2f" podUID=328c892b-7402-480b-bc29-a316c8fb7b1f
	Jul 17 23:08:30 default-k8s-diff-port-504828 kubelet[3828]: E0717 23:08:30.549102    3828 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-j8f2f" podUID=328c892b-7402-480b-bc29-a316c8fb7b1f
	Jul 17 23:08:37 default-k8s-diff-port-504828 kubelet[3828]: E0717 23:08:37.639961    3828 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 17 23:08:37 default-k8s-diff-port-504828 kubelet[3828]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 23:08:37 default-k8s-diff-port-504828 kubelet[3828]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 23:08:37 default-k8s-diff-port-504828 kubelet[3828]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 23:08:41 default-k8s-diff-port-504828 kubelet[3828]: E0717 23:08:41.549698    3828 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-j8f2f" podUID=328c892b-7402-480b-bc29-a316c8fb7b1f
	Jul 17 23:08:52 default-k8s-diff-port-504828 kubelet[3828]: E0717 23:08:52.549197    3828 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-j8f2f" podUID=328c892b-7402-480b-bc29-a316c8fb7b1f
	Jul 17 23:09:07 default-k8s-diff-port-504828 kubelet[3828]: E0717 23:09:07.550759    3828 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-j8f2f" podUID=328c892b-7402-480b-bc29-a316c8fb7b1f
	Jul 17 23:09:18 default-k8s-diff-port-504828 kubelet[3828]: E0717 23:09:18.549288    3828 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-j8f2f" podUID=328c892b-7402-480b-bc29-a316c8fb7b1f
	Jul 17 23:09:29 default-k8s-diff-port-504828 kubelet[3828]: E0717 23:09:29.552803    3828 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-j8f2f" podUID=328c892b-7402-480b-bc29-a316c8fb7b1f
	Jul 17 23:09:37 default-k8s-diff-port-504828 kubelet[3828]: E0717 23:09:37.639760    3828 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 17 23:09:37 default-k8s-diff-port-504828 kubelet[3828]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 23:09:37 default-k8s-diff-port-504828 kubelet[3828]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 23:09:37 default-k8s-diff-port-504828 kubelet[3828]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 23:09:42 default-k8s-diff-port-504828 kubelet[3828]: E0717 23:09:42.549322    3828 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-j8f2f" podUID=328c892b-7402-480b-bc29-a316c8fb7b1f
	Jul 17 23:09:57 default-k8s-diff-port-504828 kubelet[3828]: E0717 23:09:57.549542    3828 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-j8f2f" podUID=328c892b-7402-480b-bc29-a316c8fb7b1f
	Jul 17 23:10:08 default-k8s-diff-port-504828 kubelet[3828]: E0717 23:10:08.548731    3828 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-j8f2f" podUID=328c892b-7402-480b-bc29-a316c8fb7b1f
	Jul 17 23:10:19 default-k8s-diff-port-504828 kubelet[3828]: E0717 23:10:19.551502    3828 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-j8f2f" podUID=328c892b-7402-480b-bc29-a316c8fb7b1f
	
	* 
	* ==> storage-provisioner [4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6] <==
	* I0717 22:56:55.650544       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 22:56:55.661900       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 22:56:55.662045       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 22:56:55.675926       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 22:56:55.677091       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c339a55d-3fdc-4f37-b597-026e65addd23", APIVersion:"v1", ResourceVersion:"417", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-504828_69d84a11-bd06-4f89-90fb-b0fd139857e2 became leader
	I0717 22:56:55.677264       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-504828_69d84a11-bd06-4f89-90fb-b0fd139857e2!
	I0717 22:56:55.781523       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-504828_69d84a11-bd06-4f89-90fb-b0fd139857e2!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-504828 -n default-k8s-diff-port-504828
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-504828 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5c6b9c-j8f2f
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-504828 describe pod metrics-server-74d5c6b9c-j8f2f
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-504828 describe pod metrics-server-74d5c6b9c-j8f2f: exit status 1 (67.190665ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5c6b9c-j8f2f" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-504828 describe pod metrics-server-74d5c6b9c-j8f2f: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.08s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (429.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0717 23:05:31.747182   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: no such file or directory
E0717 23:06:54.798861   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: no such file or directory
E0717 23:07:28.100991   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/functional-767593/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-935524 -n no-preload-935524
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-07-17 23:12:13.510807305 +0000 UTC m=+5497.300593306
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-935524 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-935524 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.394µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-935524 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-935524 -n no-preload-935524
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-935524 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-935524 logs -n 25: (1.351920175s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-332820        | old-k8s-version-332820       | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-332820                              | old-k8s-version-332820       | jenkins | v1.31.0 | 17 Jul 23 22:43 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-571296            | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 22:44 UTC | 17 Jul 23 22:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-571296                                  | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 22:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-935524             | no-preload-935524            | jenkins | v1.31.0 | 17 Jul 23 22:44 UTC | 17 Jul 23 22:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-935524                                   | no-preload-935524            | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-504828  | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC | 17 Jul 23 22:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC |                     |
	|         | default-k8s-diff-port-504828                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-332820             | old-k8s-version-332820       | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-332820                              | old-k8s-version-332820       | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC | 17 Jul 23 22:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-571296                 | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 22:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-571296                                  | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 22:46 UTC | 17 Jul 23 23:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-935524                  | no-preload-935524            | jenkins | v1.31.0 | 17 Jul 23 22:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-504828       | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-935524                                   | no-preload-935524            | jenkins | v1.31.0 | 17 Jul 23 22:47 UTC | 17 Jul 23 22:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:47 UTC | 17 Jul 23 23:01 UTC |
	|         | default-k8s-diff-port-504828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-332820                              | old-k8s-version-332820       | jenkins | v1.31.0 | 17 Jul 23 23:10 UTC | 17 Jul 23 23:10 UTC |
	| start   | -p newest-cni-670356 --memory=2200 --alsologtostderr   | newest-cni-670356            | jenkins | v1.31.0 | 17 Jul 23 23:10 UTC | 17 Jul 23 23:11 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-670356             | newest-cni-670356            | jenkins | v1.31.0 | 17 Jul 23 23:11 UTC | 17 Jul 23 23:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-670356                                   | newest-cni-670356            | jenkins | v1.31.0 | 17 Jul 23 23:11 UTC | 17 Jul 23 23:11 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-670356                  | newest-cni-670356            | jenkins | v1.31.0 | 17 Jul 23 23:11 UTC | 17 Jul 23 23:11 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-670356 --memory=2200 --alsologtostderr   | newest-cni-670356            | jenkins | v1.31.0 | 17 Jul 23 23:11 UTC | 17 Jul 23 23:12 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| delete  | -p embed-certs-571296                                  | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 23:12 UTC | 17 Jul 23 23:12 UTC |
	| ssh     | -p newest-cni-670356 sudo                              | newest-cni-670356            | jenkins | v1.31.0 | 17 Jul 23 23:12 UTC | 17 Jul 23 23:12 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-670356                                   | newest-cni-670356            | jenkins | v1.31.0 | 17 Jul 23 23:12 UTC |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 23:12:12
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 23:12:12.494540   60278 out.go:296] Setting OutFile to fd 1 ...
	I0717 23:12:12.494670   60278 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 23:12:12.494678   60278 out.go:309] Setting ErrFile to fd 2...
	I0717 23:12:12.494683   60278 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 23:12:12.494971   60278 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-15759/.minikube/bin
	I0717 23:12:12.496346   60278 out.go:303] Setting JSON to false
	I0717 23:12:12.497546   60278 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":10485,"bootTime":1689625048,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 23:12:12.497634   60278 start.go:138] virtualization: kvm guest
	I0717 23:12:12.499699   60278 out.go:177] * [auto-987609] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 23:12:12.501770   60278 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 23:12:12.501828   60278 notify.go:220] Checking for updates...
	I0717 23:12:12.503286   60278 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 23:12:12.505095   60278 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 23:12:12.506984   60278 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-15759/.minikube
	I0717 23:12:12.508930   60278 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 23:12:12.510931   60278 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 23:12:12.514277   60278 config.go:182] Loaded profile config "default-k8s-diff-port-504828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 23:12:12.514528   60278 config.go:182] Loaded profile config "newest-cni-670356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 23:12:12.514710   60278 config.go:182] Loaded profile config "no-preload-935524": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 23:12:12.514888   60278 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 23:12:12.562050   60278 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 23:12:12.563826   60278 start.go:298] selected driver: kvm2
	I0717 23:12:12.563844   60278 start.go:880] validating driver "kvm2" against <nil>
	I0717 23:12:12.563857   60278 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 23:12:12.564661   60278 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 23:12:12.564757   60278 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16899-15759/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 23:12:12.587236   60278 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.0
	I0717 23:12:12.587316   60278 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 23:12:12.587638   60278 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 23:12:12.587684   60278 cni.go:84] Creating CNI manager for ""
	I0717 23:12:12.587701   60278 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 23:12:12.587708   60278 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 23:12:12.587718   60278 start_flags.go:319] config:
	{Name:auto-987609 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:auto-987609 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni Feat
ureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 23:12:12.587886   60278 iso.go:125] acquiring lock: {Name:mk0fc3164b0f4222cb2c3b274841202f1f6c0fc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 23:12:12.590201   60278 out.go:177] * Starting control plane node auto-987609 in cluster auto-987609
	I0717 23:12:12.591751   60278 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 23:12:12.591801   60278 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0717 23:12:12.591821   60278 cache.go:57] Caching tarball of preloaded images
	I0717 23:12:12.591938   60278 preload.go:174] Found /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 23:12:12.591953   60278 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 23:12:12.592085   60278 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/auto-987609/config.json ...
	I0717 23:12:12.592109   60278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/auto-987609/config.json: {Name:mk72bf050e1345572585f2d535c8d8ed4ea48e28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 23:12:12.592282   60278 start.go:365] acquiring machines lock for auto-987609: {Name:mk8a02d78ba6485e1a7d39c2e7feff944c6a9dbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 23:12:12.592327   60278 start.go:369] acquired machines lock for "auto-987609" in 23.637µs
	I0717 23:12:12.592350   60278 start.go:93] Provisioning new machine with config: &{Name:auto-987609 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:auto-987609 Name
space:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binary
Mirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 23:12:12.592423   60278 start.go:125] createHost starting for "" (driver="kvm2")
	
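	The "Last Start" log above uses the klog header layout stated in its own preamble: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg. For readers splitting these entries apart (for example to diff two start attempts), here is a minimal Go sketch of parsing that header; the regular expression and field names are illustrative and are not part of minikube itself.

	package main

	import (
		"fmt"
		"regexp"
	)

	// klogLine matches the header format documented in the log preamble:
	// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogLine = regexp.MustCompile(
		`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

	func main() {
		sample := `I0717 23:12:12.592423   60278 start.go:125] createHost starting for "" (driver="kvm2")`
		m := klogLine.FindStringSubmatch(sample)
		if m == nil {
			fmt.Println("no match")
			return
		}
		// m[1]=severity, m[2]=month, m[3]=day, m[4]=time, m[5]=thread id,
		// m[6]=source file, m[7]=source line, m[8]=message
		fmt.Printf("severity=%s month=%s day=%s time=%s tid=%s file=%s:%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7], m[8])
	}
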
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-07-17 22:50:43 UTC, ends at Mon 2023-07-17 23:12:14 UTC. --
	Jul 17 23:12:14 no-preload-935524 crio[717]: time="2023-07-17 23:12:14.105559334Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cc80e6b7-232e-4d83-b7f6-447100c1973b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:14 no-preload-935524 crio[717]: time="2023-07-17 23:12:14.105999731Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b,PodSandboxId:60a1553845355bd0feeaf99e11c3c2223a4336bb3b2a0eb1cb8c32e0984866fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1689634327958347539,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85812d54-7a57-430b-991e-e301f123a86a,},Annotations:map[string]string{io.kubernetes.container.hash: 57eef15c,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:261a700a8907986d0f966b98d34351dfd9336e43704a40c37776c1ed63450241,PodSandboxId:040b35ae9ad790ae7437267dedcfff686f68e7be033dd9162aca001980b6523d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689634305663330716,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dcf23863-eb23-4dfc-91c8-866a27d56aa7,},Annotations:map[string]string{io.kubernetes.container.hash: 973fd6c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266,PodSandboxId:f902332e9e90689e37f58ce26a95d7ab0f14618710c232deeaf87c7ea8406702,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1689634304354928918,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-2mpst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7516b57f-a4cb-4e2f-995e-8e063bed22ae,},Annotations:map[string]string{io.kubernetes.container.hash: c29e4e62,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567,PodSandboxId:51533f726d16a7a25c8f1df3b069554994196fc755a027e66f62ce82cc4ef14f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1689634297112719059,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qhp66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bc95955-b
7ba-41e3-ac67-604a9695f784,},Annotations:map[string]string{io.kubernetes.container.hash: 699cfe2f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6,PodSandboxId:60a1553845355bd0feeaf99e11c3c2223a4336bb3b2a0eb1cb8c32e0984866fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1689634296971941342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85812d54-7a5
7-430b-991e-e301f123a86a,},Annotations:map[string]string{io.kubernetes.container.hash: 57eef15c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea,PodSandboxId:4df7366612b3156863fa7782df6ce3f5f90b3ae71ac30ca4025309c9a266320b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1689634290450992126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92baac5ff4aef0bdc09a7e86a9f715db,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 27a9a45e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629,PodSandboxId:9772f73a659f4549b64414a8ab63f4c9f3e50cb794ca280b596e104220f6529c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1689634290189634573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fc722d6f7af09db92d907e47260519,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f,PodSandboxId:562bec26ceed6b076fa95d86bc8316044aca8e3aa08dccfd15f33744253a910c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1689634289922162015,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bae05c026731489afedf650b3c97278,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: aa189e2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c,PodSandboxId:fd17cc14d6355da6992dd10c10923917d48d3166cda4a978b71e1f5035bd7ce9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1689634289731127688,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2084677272e90c7a54057bf2dd1092d,},Annotations:map[strin
g]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cc80e6b7-232e-4d83-b7f6-447100c1973b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:14 no-preload-935524 crio[717]: time="2023-07-17 23:12:14.151722270Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6fc5101f-b811-457a-badf-b06dd95cd082 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:14 no-preload-935524 crio[717]: time="2023-07-17 23:12:14.151852295Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6fc5101f-b811-457a-badf-b06dd95cd082 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:14 no-preload-935524 crio[717]: time="2023-07-17 23:12:14.152128243Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b,PodSandboxId:60a1553845355bd0feeaf99e11c3c2223a4336bb3b2a0eb1cb8c32e0984866fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1689634327958347539,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85812d54-7a57-430b-991e-e301f123a86a,},Annotations:map[string]string{io.kubernetes.container.hash: 57eef15c,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:261a700a8907986d0f966b98d34351dfd9336e43704a40c37776c1ed63450241,PodSandboxId:040b35ae9ad790ae7437267dedcfff686f68e7be033dd9162aca001980b6523d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689634305663330716,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dcf23863-eb23-4dfc-91c8-866a27d56aa7,},Annotations:map[string]string{io.kubernetes.container.hash: 973fd6c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266,PodSandboxId:f902332e9e90689e37f58ce26a95d7ab0f14618710c232deeaf87c7ea8406702,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1689634304354928918,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-2mpst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7516b57f-a4cb-4e2f-995e-8e063bed22ae,},Annotations:map[string]string{io.kubernetes.container.hash: c29e4e62,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567,PodSandboxId:51533f726d16a7a25c8f1df3b069554994196fc755a027e66f62ce82cc4ef14f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1689634297112719059,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qhp66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bc95955-b
7ba-41e3-ac67-604a9695f784,},Annotations:map[string]string{io.kubernetes.container.hash: 699cfe2f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6,PodSandboxId:60a1553845355bd0feeaf99e11c3c2223a4336bb3b2a0eb1cb8c32e0984866fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1689634296971941342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85812d54-7a5
7-430b-991e-e301f123a86a,},Annotations:map[string]string{io.kubernetes.container.hash: 57eef15c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea,PodSandboxId:4df7366612b3156863fa7782df6ce3f5f90b3ae71ac30ca4025309c9a266320b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1689634290450992126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92baac5ff4aef0bdc09a7e86a9f715db,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 27a9a45e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629,PodSandboxId:9772f73a659f4549b64414a8ab63f4c9f3e50cb794ca280b596e104220f6529c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1689634290189634573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fc722d6f7af09db92d907e47260519,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f,PodSandboxId:562bec26ceed6b076fa95d86bc8316044aca8e3aa08dccfd15f33744253a910c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1689634289922162015,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bae05c026731489afedf650b3c97278,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: aa189e2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c,PodSandboxId:fd17cc14d6355da6992dd10c10923917d48d3166cda4a978b71e1f5035bd7ce9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1689634289731127688,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2084677272e90c7a54057bf2dd1092d,},Annotations:map[strin
g]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6fc5101f-b811-457a-badf-b06dd95cd082 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:14 no-preload-935524 crio[717]: time="2023-07-17 23:12:14.203734700Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=aa7efe42-651c-4676-abbe-5c977adaa303 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:14 no-preload-935524 crio[717]: time="2023-07-17 23:12:14.203855555Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=aa7efe42-651c-4676-abbe-5c977adaa303 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:14 no-preload-935524 crio[717]: time="2023-07-17 23:12:14.204105139Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b,PodSandboxId:60a1553845355bd0feeaf99e11c3c2223a4336bb3b2a0eb1cb8c32e0984866fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1689634327958347539,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85812d54-7a57-430b-991e-e301f123a86a,},Annotations:map[string]string{io.kubernetes.container.hash: 57eef15c,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:261a700a8907986d0f966b98d34351dfd9336e43704a40c37776c1ed63450241,PodSandboxId:040b35ae9ad790ae7437267dedcfff686f68e7be033dd9162aca001980b6523d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689634305663330716,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dcf23863-eb23-4dfc-91c8-866a27d56aa7,},Annotations:map[string]string{io.kubernetes.container.hash: 973fd6c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266,PodSandboxId:f902332e9e90689e37f58ce26a95d7ab0f14618710c232deeaf87c7ea8406702,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1689634304354928918,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-2mpst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7516b57f-a4cb-4e2f-995e-8e063bed22ae,},Annotations:map[string]string{io.kubernetes.container.hash: c29e4e62,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567,PodSandboxId:51533f726d16a7a25c8f1df3b069554994196fc755a027e66f62ce82cc4ef14f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1689634297112719059,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qhp66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bc95955-b
7ba-41e3-ac67-604a9695f784,},Annotations:map[string]string{io.kubernetes.container.hash: 699cfe2f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6,PodSandboxId:60a1553845355bd0feeaf99e11c3c2223a4336bb3b2a0eb1cb8c32e0984866fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1689634296971941342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85812d54-7a5
7-430b-991e-e301f123a86a,},Annotations:map[string]string{io.kubernetes.container.hash: 57eef15c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea,PodSandboxId:4df7366612b3156863fa7782df6ce3f5f90b3ae71ac30ca4025309c9a266320b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1689634290450992126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92baac5ff4aef0bdc09a7e86a9f715db,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 27a9a45e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629,PodSandboxId:9772f73a659f4549b64414a8ab63f4c9f3e50cb794ca280b596e104220f6529c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1689634290189634573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fc722d6f7af09db92d907e47260519,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f,PodSandboxId:562bec26ceed6b076fa95d86bc8316044aca8e3aa08dccfd15f33744253a910c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1689634289922162015,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bae05c026731489afedf650b3c97278,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: aa189e2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c,PodSandboxId:fd17cc14d6355da6992dd10c10923917d48d3166cda4a978b71e1f5035bd7ce9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1689634289731127688,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2084677272e90c7a54057bf2dd1092d,},Annotations:map[strin
g]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=aa7efe42-651c-4676-abbe-5c977adaa303 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:14 no-preload-935524 crio[717]: time="2023-07-17 23:12:14.248819201Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f228fb9a-f573-4864-addc-dda0b5ce0f55 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:14 no-preload-935524 crio[717]: time="2023-07-17 23:12:14.248908339Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f228fb9a-f573-4864-addc-dda0b5ce0f55 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:14 no-preload-935524 crio[717]: time="2023-07-17 23:12:14.249192848Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b,PodSandboxId:60a1553845355bd0feeaf99e11c3c2223a4336bb3b2a0eb1cb8c32e0984866fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1689634327958347539,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85812d54-7a57-430b-991e-e301f123a86a,},Annotations:map[string]string{io.kubernetes.container.hash: 57eef15c,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:261a700a8907986d0f966b98d34351dfd9336e43704a40c37776c1ed63450241,PodSandboxId:040b35ae9ad790ae7437267dedcfff686f68e7be033dd9162aca001980b6523d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689634305663330716,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dcf23863-eb23-4dfc-91c8-866a27d56aa7,},Annotations:map[string]string{io.kubernetes.container.hash: 973fd6c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266,PodSandboxId:f902332e9e90689e37f58ce26a95d7ab0f14618710c232deeaf87c7ea8406702,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1689634304354928918,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-2mpst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7516b57f-a4cb-4e2f-995e-8e063bed22ae,},Annotations:map[string]string{io.kubernetes.container.hash: c29e4e62,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567,PodSandboxId:51533f726d16a7a25c8f1df3b069554994196fc755a027e66f62ce82cc4ef14f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1689634297112719059,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qhp66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bc95955-b
7ba-41e3-ac67-604a9695f784,},Annotations:map[string]string{io.kubernetes.container.hash: 699cfe2f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6,PodSandboxId:60a1553845355bd0feeaf99e11c3c2223a4336bb3b2a0eb1cb8c32e0984866fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1689634296971941342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85812d54-7a5
7-430b-991e-e301f123a86a,},Annotations:map[string]string{io.kubernetes.container.hash: 57eef15c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea,PodSandboxId:4df7366612b3156863fa7782df6ce3f5f90b3ae71ac30ca4025309c9a266320b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1689634290450992126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92baac5ff4aef0bdc09a7e86a9f715db,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 27a9a45e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629,PodSandboxId:9772f73a659f4549b64414a8ab63f4c9f3e50cb794ca280b596e104220f6529c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1689634290189634573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fc722d6f7af09db92d907e47260519,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f,PodSandboxId:562bec26ceed6b076fa95d86bc8316044aca8e3aa08dccfd15f33744253a910c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1689634289922162015,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bae05c026731489afedf650b3c97278,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: aa189e2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c,PodSandboxId:fd17cc14d6355da6992dd10c10923917d48d3166cda4a978b71e1f5035bd7ce9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1689634289731127688,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2084677272e90c7a54057bf2dd1092d,},Annotations:map[strin
g]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f228fb9a-f573-4864-addc-dda0b5ce0f55 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:14 no-preload-935524 crio[717]: time="2023-07-17 23:12:14.288762481Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9be450bb-2a61-46a8-9f88-84f8c3975566 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:14 no-preload-935524 crio[717]: time="2023-07-17 23:12:14.288959392Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9be450bb-2a61-46a8-9f88-84f8c3975566 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:14 no-preload-935524 crio[717]: time="2023-07-17 23:12:14.289296862Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b,PodSandboxId:60a1553845355bd0feeaf99e11c3c2223a4336bb3b2a0eb1cb8c32e0984866fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1689634327958347539,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85812d54-7a57-430b-991e-e301f123a86a,},Annotations:map[string]string{io.kubernetes.container.hash: 57eef15c,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:261a700a8907986d0f966b98d34351dfd9336e43704a40c37776c1ed63450241,PodSandboxId:040b35ae9ad790ae7437267dedcfff686f68e7be033dd9162aca001980b6523d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689634305663330716,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dcf23863-eb23-4dfc-91c8-866a27d56aa7,},Annotations:map[string]string{io.kubernetes.container.hash: 973fd6c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266,PodSandboxId:f902332e9e90689e37f58ce26a95d7ab0f14618710c232deeaf87c7ea8406702,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1689634304354928918,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-2mpst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7516b57f-a4cb-4e2f-995e-8e063bed22ae,},Annotations:map[string]string{io.kubernetes.container.hash: c29e4e62,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567,PodSandboxId:51533f726d16a7a25c8f1df3b069554994196fc755a027e66f62ce82cc4ef14f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1689634297112719059,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qhp66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bc95955-b
7ba-41e3-ac67-604a9695f784,},Annotations:map[string]string{io.kubernetes.container.hash: 699cfe2f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6,PodSandboxId:60a1553845355bd0feeaf99e11c3c2223a4336bb3b2a0eb1cb8c32e0984866fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1689634296971941342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85812d54-7a5
7-430b-991e-e301f123a86a,},Annotations:map[string]string{io.kubernetes.container.hash: 57eef15c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea,PodSandboxId:4df7366612b3156863fa7782df6ce3f5f90b3ae71ac30ca4025309c9a266320b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1689634290450992126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92baac5ff4aef0bdc09a7e86a9f715db,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 27a9a45e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629,PodSandboxId:9772f73a659f4549b64414a8ab63f4c9f3e50cb794ca280b596e104220f6529c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1689634290189634573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fc722d6f7af09db92d907e47260519,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f,PodSandboxId:562bec26ceed6b076fa95d86bc8316044aca8e3aa08dccfd15f33744253a910c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1689634289922162015,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bae05c026731489afedf650b3c97278,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: aa189e2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c,PodSandboxId:fd17cc14d6355da6992dd10c10923917d48d3166cda4a978b71e1f5035bd7ce9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1689634289731127688,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2084677272e90c7a54057bf2dd1092d,},Annotations:map[strin
g]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9be450bb-2a61-46a8-9f88-84f8c3975566 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:14 no-preload-935524 crio[717]: time="2023-07-17 23:12:14.329027188Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6f59fb5a-a7b4-4c84-92de-78fbb8e5cb67 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:14 no-preload-935524 crio[717]: time="2023-07-17 23:12:14.329108847Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6f59fb5a-a7b4-4c84-92de-78fbb8e5cb67 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:14 no-preload-935524 crio[717]: time="2023-07-17 23:12:14.330560554Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b,PodSandboxId:60a1553845355bd0feeaf99e11c3c2223a4336bb3b2a0eb1cb8c32e0984866fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1689634327958347539,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85812d54-7a57-430b-991e-e301f123a86a,},Annotations:map[string]string{io.kubernetes.container.hash: 57eef15c,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:261a700a8907986d0f966b98d34351dfd9336e43704a40c37776c1ed63450241,PodSandboxId:040b35ae9ad790ae7437267dedcfff686f68e7be033dd9162aca001980b6523d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689634305663330716,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dcf23863-eb23-4dfc-91c8-866a27d56aa7,},Annotations:map[string]string{io.kubernetes.container.hash: 973fd6c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266,PodSandboxId:f902332e9e90689e37f58ce26a95d7ab0f14618710c232deeaf87c7ea8406702,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1689634304354928918,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-2mpst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7516b57f-a4cb-4e2f-995e-8e063bed22ae,},Annotations:map[string]string{io.kubernetes.container.hash: c29e4e62,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567,PodSandboxId:51533f726d16a7a25c8f1df3b069554994196fc755a027e66f62ce82cc4ef14f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1689634297112719059,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qhp66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bc95955-b
7ba-41e3-ac67-604a9695f784,},Annotations:map[string]string{io.kubernetes.container.hash: 699cfe2f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6,PodSandboxId:60a1553845355bd0feeaf99e11c3c2223a4336bb3b2a0eb1cb8c32e0984866fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1689634296971941342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85812d54-7a5
7-430b-991e-e301f123a86a,},Annotations:map[string]string{io.kubernetes.container.hash: 57eef15c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea,PodSandboxId:4df7366612b3156863fa7782df6ce3f5f90b3ae71ac30ca4025309c9a266320b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1689634290450992126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92baac5ff4aef0bdc09a7e86a9f715db,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 27a9a45e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629,PodSandboxId:9772f73a659f4549b64414a8ab63f4c9f3e50cb794ca280b596e104220f6529c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1689634290189634573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fc722d6f7af09db92d907e47260519,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f,PodSandboxId:562bec26ceed6b076fa95d86bc8316044aca8e3aa08dccfd15f33744253a910c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1689634289922162015,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bae05c026731489afedf650b3c97278,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: aa189e2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c,PodSandboxId:fd17cc14d6355da6992dd10c10923917d48d3166cda4a978b71e1f5035bd7ce9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1689634289731127688,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2084677272e90c7a54057bf2dd1092d,},Annotations:map[strin
g]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6f59fb5a-a7b4-4c84-92de-78fbb8e5cb67 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:14 no-preload-935524 crio[717]: time="2023-07-17 23:12:14.366169722Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=8d505166-7de4-4f4c-8b84-1f00a4beee73 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 23:12:14 no-preload-935524 crio[717]: time="2023-07-17 23:12:14.366568945Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:f902332e9e90689e37f58ce26a95d7ab0f14618710c232deeaf87c7ea8406702,Metadata:&PodSandboxMetadata{Name:coredns-5d78c9869d-2mpst,Uid:7516b57f-a4cb-4e2f-995e-8e063bed22ae,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634303654424753,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5d78c9869d-2mpst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7516b57f-a4cb-4e2f-995e-8e063bed22ae,k8s-app: kube-dns,pod-template-hash: 5d78c9869d,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T22:51:35.651731739Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:040b35ae9ad790ae7437267dedcfff686f68e7be033dd9162aca001980b6523d,Metadata:&PodSandboxMetadata{Name:busybox,Uid:dcf23863-eb23-4dfc-91c8-866a27d56aa7,Namespace:default,Attempt:0,},Stat
e:SANDBOX_READY,CreatedAt:1689634303644937736,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dcf23863-eb23-4dfc-91c8-866a27d56aa7,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T22:51:35.651716363Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1f5ca97d916e4d004b7c51e61f4548011250a8cb58c8de08eb189e2e3e508fc4,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5c6b9c-tlbpl,Uid:7c478efe-4435-45dd-a688-745872fc2918,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634300917336979,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5c6b9c-tlbpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c478efe-4435-45dd-a688-745872fc2918,k8s-app: metrics-server,pod-template-hash: 74d5c6b9c,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T22:51:35.6517
27635Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:60a1553845355bd0feeaf99e11c3c2223a4336bb3b2a0eb1cb8c32e0984866fc,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:85812d54-7a57-430b-991e-e301f123a86a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634296019814543,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85812d54-7a57-430b-991e-e301f123a86a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-mini
kube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-07-17T22:51:35.651729418Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:51533f726d16a7a25c8f1df3b069554994196fc755a027e66f62ce82cc4ef14f,Metadata:&PodSandboxMetadata{Name:kube-proxy-qhp66,Uid:8bc95955-b7ba-41e3-ac67-604a9695f784,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634296016001704,Labels:map[string]string{controller-revision-hash: 56999f657b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-qhp66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bc95955-b7ba-41e3-ac67-604a9695f784,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/co
nfig.seen: 2023-07-17T22:51:35.651725890Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9772f73a659f4549b64414a8ab63f4c9f3e50cb794ca280b596e104220f6529c,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-935524,Uid:f2fc722d6f7af09db92d907e47260519,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634289211695893,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fc722d6f7af09db92d907e47260519,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f2fc722d6f7af09db92d907e47260519,kubernetes.io/config.seen: 2023-07-17T22:51:28.643973980Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:562bec26ceed6b076fa95d86bc8316044aca8e3aa08dccfd15f33744253a910c,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-935524,Uid:3bae05c026731489afedf650b3c97278,Namespace:kube
-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634289197674112,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bae05c026731489afedf650b3c97278,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.6:8443,kubernetes.io/config.hash: 3bae05c026731489afedf650b3c97278,kubernetes.io/config.seen: 2023-07-17T22:51:28.643971934Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4df7366612b3156863fa7782df6ce3f5f90b3ae71ac30ca4025309c9a266320b,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-935524,Uid:92baac5ff4aef0bdc09a7e86a9f715db,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634289188260097,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-935524,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 92baac5ff4aef0bdc09a7e86a9f715db,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.6:2379,kubernetes.io/config.hash: 92baac5ff4aef0bdc09a7e86a9f715db,kubernetes.io/config.seen: 2023-07-17T22:51:28.643967432Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fd17cc14d6355da6992dd10c10923917d48d3166cda4a978b71e1f5035bd7ce9,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-935524,Uid:b2084677272e90c7a54057bf2dd1092d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634289181834444,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2084677272e90c7a54057bf2dd1092d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b2084677272e90c7a54057bf2dd1092d,kubernete
s.io/config.seen: 2023-07-17T22:51:28.643973099Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=8d505166-7de4-4f4c-8b84-1f00a4beee73 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 23:12:14 no-preload-935524 crio[717]: time="2023-07-17 23:12:14.367346825Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d7ce972c-866d-4f80-80c5-e53a167e0803 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 23:12:14 no-preload-935524 crio[717]: time="2023-07-17 23:12:14.367460173Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d7ce972c-866d-4f80-80c5-e53a167e0803 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 23:12:14 no-preload-935524 crio[717]: time="2023-07-17 23:12:14.367843681Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b,PodSandboxId:60a1553845355bd0feeaf99e11c3c2223a4336bb3b2a0eb1cb8c32e0984866fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1689634327958347539,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85812d54-7a57-430b-991e-e301f123a86a,},Annotations:map[string]string{io.kubernetes.container.hash: 57eef15c,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:261a700a8907986d0f966b98d34351dfd9336e43704a40c37776c1ed63450241,PodSandboxId:040b35ae9ad790ae7437267dedcfff686f68e7be033dd9162aca001980b6523d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689634305663330716,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dcf23863-eb23-4dfc-91c8-866a27d56aa7,},Annotations:map[string]string{io.kubernetes.container.hash: 973fd6c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266,PodSandboxId:f902332e9e90689e37f58ce26a95d7ab0f14618710c232deeaf87c7ea8406702,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1689634304354928918,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-2mpst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7516b57f-a4cb-4e2f-995e-8e063bed22ae,},Annotations:map[string]string{io.kubernetes.container.hash: c29e4e62,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567,PodSandboxId:51533f726d16a7a25c8f1df3b069554994196fc755a027e66f62ce82cc4ef14f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1689634297112719059,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qhp66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bc95955-b
7ba-41e3-ac67-604a9695f784,},Annotations:map[string]string{io.kubernetes.container.hash: 699cfe2f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6,PodSandboxId:60a1553845355bd0feeaf99e11c3c2223a4336bb3b2a0eb1cb8c32e0984866fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1689634296971941342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85812d54-7a5
7-430b-991e-e301f123a86a,},Annotations:map[string]string{io.kubernetes.container.hash: 57eef15c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea,PodSandboxId:4df7366612b3156863fa7782df6ce3f5f90b3ae71ac30ca4025309c9a266320b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1689634290450992126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92baac5ff4aef0bdc09a7e86a9f715db,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 27a9a45e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629,PodSandboxId:9772f73a659f4549b64414a8ab63f4c9f3e50cb794ca280b596e104220f6529c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1689634290189634573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fc722d6f7af09db92d907e47260519,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f,PodSandboxId:562bec26ceed6b076fa95d86bc8316044aca8e3aa08dccfd15f33744253a910c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1689634289922162015,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bae05c026731489afedf650b3c97278,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: aa189e2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c,PodSandboxId:fd17cc14d6355da6992dd10c10923917d48d3166cda4a978b71e1f5035bd7ce9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1689634289731127688,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2084677272e90c7a54057bf2dd1092d,},Annotations:map[strin
g]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d7ce972c-866d-4f80-80c5-e53a167e0803 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 23:12:14 no-preload-935524 crio[717]: time="2023-07-17 23:12:14.377971714Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9b761dfb-4018-43ee-8354-f65fc08114d3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:14 no-preload-935524 crio[717]: time="2023-07-17 23:12:14.378058905Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9b761dfb-4018-43ee-8354-f65fc08114d3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:14 no-preload-935524 crio[717]: time="2023-07-17 23:12:14.378355797Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b,PodSandboxId:60a1553845355bd0feeaf99e11c3c2223a4336bb3b2a0eb1cb8c32e0984866fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1689634327958347539,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85812d54-7a57-430b-991e-e301f123a86a,},Annotations:map[string]string{io.kubernetes.container.hash: 57eef15c,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:261a700a8907986d0f966b98d34351dfd9336e43704a40c37776c1ed63450241,PodSandboxId:040b35ae9ad790ae7437267dedcfff686f68e7be033dd9162aca001980b6523d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1689634305663330716,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dcf23863-eb23-4dfc-91c8-866a27d56aa7,},Annotations:map[string]string{io.kubernetes.container.hash: 973fd6c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266,PodSandboxId:f902332e9e90689e37f58ce26a95d7ab0f14618710c232deeaf87c7ea8406702,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1689634304354928918,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-2mpst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7516b57f-a4cb-4e2f-995e-8e063bed22ae,},Annotations:map[string]string{io.kubernetes.container.hash: c29e4e62,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567,PodSandboxId:51533f726d16a7a25c8f1df3b069554994196fc755a027e66f62ce82cc4ef14f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:5c781e52b7df26c513fd7a17cb064f14d0e54ec60c78bb251d66373acabee06f,State:CONTAINER_RUNNING,CreatedAt:1689634297112719059,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qhp66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8bc95955-b
7ba-41e3-ac67-604a9695f784,},Annotations:map[string]string{io.kubernetes.container.hash: 699cfe2f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6,PodSandboxId:60a1553845355bd0feeaf99e11c3c2223a4336bb3b2a0eb1cb8c32e0984866fc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1689634296971941342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85812d54-7a5
7-430b-991e-e301f123a86a,},Annotations:map[string]string{io.kubernetes.container.hash: 57eef15c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea,PodSandboxId:4df7366612b3156863fa7782df6ce3f5f90b3ae71ac30ca4025309c9a266320b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:debe9299a031af8da434bff972ab85d954976787aea87f06307f9212e6c94efb,State:CONTAINER_RUNNING,CreatedAt:1689634290450992126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92baac5ff4aef0bdc09a7e86a9f715db,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 27a9a45e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629,PodSandboxId:9772f73a659f4549b64414a8ab63f4c9f3e50cb794ca280b596e104220f6529c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:1312d079d0fd189075a309c99b061074e04ca1796d87edf7fd5f7aa7ea021439,State:CONTAINER_RUNNING,CreatedAt:1689634290189634573,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fc722d6f7af09db92d907e47260519,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f,PodSandboxId:562bec26ceed6b076fa95d86bc8316044aca8e3aa08dccfd15f33744253a910c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:d1c84bdb695b9d4c6d5a5ffc1370dd8e39cc36a06772e31f78deed3e29ac2cef,State:CONTAINER_RUNNING,CreatedAt:1689634289922162015,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bae05c026731489afedf650b3c97278,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: aa189e2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c,PodSandboxId:fd17cc14d6355da6992dd10c10923917d48d3166cda4a978b71e1f5035bd7ce9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5e1d3a533b714d2b891c6bdbdcbe86051e96eb7a8b76c751f540f7d4de3fd133,State:CONTAINER_RUNNING,CreatedAt:1689634289731127688,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-935524,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2084677272e90c7a54057bf2dd1092d,},Annotations:map[strin
g]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9b761dfb-4018-43ee-8354-f65fc08114d3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	a67aa752ac1c9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       3                   60a1553845355
	261a700a89079       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   040b35ae9ad79
	acfd42b72df4e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      20 minutes ago      Running             coredns                   1                   f902332e9e906
	9d9c7f49bf240       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c                                      20 minutes ago      Running             kube-proxy                1                   51533f726d16a
	4d1cbdc04001f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       2                   60a1553845355
	98d6ff57de0a6       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681                                      20 minutes ago      Running             etcd                      1                   4df7366612b31
	692978c127c58       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a                                      20 minutes ago      Running             kube-scheduler            1                   9772f73a659f4
	c809651d0696d       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a                                      20 minutes ago      Running             kube-apiserver            1                   562bec26ceed6
	f0b0c765bf6d1       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f                                      20 minutes ago      Running             kube-controller-manager   1                   fd17cc14d6355
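	For reference, a container listing like the one above can usually be reproduced against the same profile with crictl inside the minikube VM (profile name taken from this log; exact columns may vary by crictl version):
	  $ minikube ssh -p no-preload-935524 -- sudo crictl ps -a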
	
	* 
	* ==> coredns [acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:43682 - 9760 "HINFO IN 8743738622397940181.1830343981996442493. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.007463283s
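	The coredns output above was captured from the restarted pod; equivalent logs can typically be pulled with kubectl using the kube-dns label selector seen in the pod sandbox metadata earlier in this log (context name assumed to match the profile):
	  $ kubectl --context no-preload-935524 -n kube-system logs -l k8s-app=kube-dns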
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-935524
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-935524
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8
	                    minikube.k8s.io/name=no-preload-935524
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T22_43_54_0700
	                    minikube.k8s.io/version=v1.31.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 22:43:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-935524
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 23:12:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 23:07:23 +0000   Mon, 17 Jul 2023 22:43:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 23:07:23 +0000   Mon, 17 Jul 2023 22:43:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 23:07:23 +0000   Mon, 17 Jul 2023 22:43:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 23:07:23 +0000   Mon, 17 Jul 2023 22:51:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.6
	  Hostname:    no-preload-935524
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 5e3c6fd294d54e4a8c1cf33a06e3109f
	  System UUID:                5e3c6fd2-94d5-4e4a-8c1c-f33a06e3109f
	  Boot ID:                    4c435d91-69b7-4bb5-af25-116bb7b7e15d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-5d78c9869d-2mpst                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-935524                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-935524             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-935524    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-qhp66                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-935524             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-74d5c6b9c-tlbpl               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 28m                kube-proxy       
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node no-preload-935524 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node no-preload-935524 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node no-preload-935524 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-935524 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-935524 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-935524 status is now: NodeHasSufficientPID
	  Normal  NodeReady                28m                kubelet          Node no-preload-935524 status is now: NodeReady
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-935524 event: Registered Node no-preload-935524 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node no-preload-935524 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node no-preload-935524 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node no-preload-935524 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node no-preload-935524 event: Registered Node no-preload-935524 in Controller
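	The node description above mirrors what kubectl reports directly; it can usually be regenerated with (context name assumed to match the profile):
	  $ kubectl --context no-preload-935524 describe node no-preload-935524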
	
	* 
	* ==> dmesg <==
	* [Jul17 22:50] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.081235] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.519377] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.565724] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.156384] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.586780] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.796455] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.133854] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.146421] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.106068] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.256775] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[Jul17 22:51] systemd-fstab-generator[1236]: Ignoring "noauto" for root device
	[ +15.358306] kauditd_printk_skb: 19 callbacks suppressed
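	The kernel ring buffer excerpt above comes from inside the VM; a comparable dump can normally be obtained with:
	  $ minikube ssh -p no-preload-935524 -- dmesg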
	
	* 
	* ==> etcd [98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea] <==
	* {"level":"info","ts":"2023-07-17T23:10:40.222Z","caller":"traceutil/trace.go:171","msg":"trace[1101342372] transaction","detail":"{read_only:false; response_revision:1545; number_of_response:1; }","duration":"233.884253ms","start":"2023-07-17T23:10:39.988Z","end":"2023-07-17T23:10:40.222Z","steps":["trace[1101342372] 'process raft request'  (duration: 233.456023ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T23:11:33.254Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1344}
	{"level":"info","ts":"2023-07-17T23:11:33.257Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1344,"took":"2.073636ms","hash":3105350198}
	{"level":"info","ts":"2023-07-17T23:11:33.257Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3105350198,"revision":1344,"compact-revision":1101}
	{"level":"info","ts":"2023-07-17T23:11:49.663Z","caller":"traceutil/trace.go:171","msg":"trace[360628098] transaction","detail":"{read_only:false; response_revision:1600; number_of_response:1; }","duration":"963.879463ms","start":"2023-07-17T23:11:48.699Z","end":"2023-07-17T23:11:49.663Z","steps":["trace[360628098] 'process raft request'  (duration: 963.755619ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T23:11:49.664Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T23:11:48.699Z","time spent":"964.51924ms","remote":"127.0.0.1:36282","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1598 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2023-07-17T23:11:50.110Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.843634ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11349222132375749154 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-4rseqynxskwzjab6n3ldlcty3y\" mod_revision:1592 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-4rseqynxskwzjab6n3ldlcty3y\" value_size:608 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-4rseqynxskwzjab6n3ldlcty3y\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-07-17T23:11:50.111Z","caller":"traceutil/trace.go:171","msg":"trace[2028428376] linearizableReadLoop","detail":"{readStateIndex:1891; appliedIndex:1890; }","duration":"603.540704ms","start":"2023-07-17T23:11:49.507Z","end":"2023-07-17T23:11:50.110Z","steps":["trace[2028428376] 'read index received'  (duration: 155.762448ms)","trace[2028428376] 'applied index is now lower than readState.Index'  (duration: 447.7756ms)"],"step_count":2}
	{"level":"info","ts":"2023-07-17T23:11:50.111Z","caller":"traceutil/trace.go:171","msg":"trace[982428005] transaction","detail":"{read_only:false; response_revision:1601; number_of_response:1; }","duration":"1.105142226s","start":"2023-07-17T23:11:49.005Z","end":"2023-07-17T23:11:50.111Z","steps":["trace[982428005] 'process raft request'  (duration: 973.971372ms)","trace[982428005] 'compare'  (duration: 129.463926ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-17T23:11:50.111Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T23:11:49.005Z","time spent":"1.105236334s","remote":"127.0.0.1:36308","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":681,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-4rseqynxskwzjab6n3ldlcty3y\" mod_revision:1592 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-4rseqynxskwzjab6n3ldlcty3y\" value_size:608 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-4rseqynxskwzjab6n3ldlcty3y\" > >"}
	{"level":"warn","ts":"2023-07-17T23:11:50.111Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"604.155463ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-17T23:11:50.111Z","caller":"traceutil/trace.go:171","msg":"trace[1623694597] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1601; }","duration":"604.251512ms","start":"2023-07-17T23:11:49.507Z","end":"2023-07-17T23:11:50.111Z","steps":["trace[1623694597] 'agreement among raft nodes before linearized reading'  (duration: 604.032278ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T23:11:50.111Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T23:11:49.507Z","time spent":"604.554028ms","remote":"127.0.0.1:36244","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2023-07-17T23:11:50.360Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.037859ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11349222132375749155 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/no-preload-935524\" mod_revision:1593 > success:<request_put:<key:\"/registry/leases/kube-node-lease/no-preload-935524\" value_size:498 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/no-preload-935524\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-07-17T23:11:50.360Z","caller":"traceutil/trace.go:171","msg":"trace[390095271] linearizableReadLoop","detail":"{readStateIndex:1892; appliedIndex:1891; }","duration":"249.654112ms","start":"2023-07-17T23:11:50.111Z","end":"2023-07-17T23:11:50.360Z","steps":["trace[390095271] 'read index received'  (duration: 120.255953ms)","trace[390095271] 'applied index is now lower than readState.Index'  (duration: 129.396928ms)"],"step_count":2}
	{"level":"info","ts":"2023-07-17T23:11:50.360Z","caller":"traceutil/trace.go:171","msg":"trace[1247072151] transaction","detail":"{read_only:false; response_revision:1602; number_of_response:1; }","duration":"337.420533ms","start":"2023-07-17T23:11:50.023Z","end":"2023-07-17T23:11:50.360Z","steps":["trace[1247072151] 'process raft request'  (duration: 208.047353ms)","trace[1247072151] 'compare'  (duration: 128.949102ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-17T23:11:50.360Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T23:11:50.023Z","time spent":"337.523679ms","remote":"127.0.0.1:36308","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":556,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/no-preload-935524\" mod_revision:1593 > success:<request_put:<key:\"/registry/leases/kube-node-lease/no-preload-935524\" value_size:498 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/no-preload-935524\" > >"}
	{"level":"warn","ts":"2023-07-17T23:11:50.361Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"530.879801ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/\" range_end:\"/registry/serviceaccounts0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-07-17T23:11:50.361Z","caller":"traceutil/trace.go:171","msg":"trace[1939889687] range","detail":"{range_begin:/registry/serviceaccounts/; range_end:/registry/serviceaccounts0; response_count:0; response_revision:1602; }","duration":"530.940355ms","start":"2023-07-17T23:11:49.830Z","end":"2023-07-17T23:11:50.361Z","steps":["trace[1939889687] 'agreement among raft nodes before linearized reading'  (duration: 530.731466ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T23:11:50.361Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T23:11:49.830Z","time spent":"531.030766ms","remote":"127.0.0.1:36288","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":40,"response size":29,"request content":"key:\"/registry/serviceaccounts/\" range_end:\"/registry/serviceaccounts0\" count_only:true "}
	{"level":"warn","ts":"2023-07-17T23:11:50.361Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"247.392277ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-17T23:11:50.361Z","caller":"traceutil/trace.go:171","msg":"trace[1544784846] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1602; }","duration":"247.465319ms","start":"2023-07-17T23:11:50.113Z","end":"2023-07-17T23:11:50.361Z","steps":["trace[1544784846] 'agreement among raft nodes before linearized reading'  (duration: 247.356773ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T23:11:50.361Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"692.036834ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-17T23:11:50.361Z","caller":"traceutil/trace.go:171","msg":"trace[1360711281] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1602; }","duration":"692.102131ms","start":"2023-07-17T23:11:49.669Z","end":"2023-07-17T23:11:50.361Z","steps":["trace[1360711281] 'agreement among raft nodes before linearized reading'  (duration: 692.000216ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T23:11:50.361Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T23:11:49.669Z","time spent":"692.25607ms","remote":"127.0.0.1:36286","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":27,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	
	* 
	* ==> kernel <==
	*  23:12:14 up 21 min,  0 users,  load average: 0.16, 0.14, 0.11
	Linux no-preload-935524 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f] <==
	* I0717 23:09:36.394644       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 23:10:35.195064       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.96.173.99:443: connect: connection refused
	I0717 23:10:35.195122       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0717 23:11:35.194790       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.96.173.99:443: connect: connection refused
	I0717 23:11:35.194842       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0717 23:11:35.398121       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.96.173.99:443: connect: connection refused
	I0717 23:11:35.398173       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0717 23:11:36.398668       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 23:11:36.398918       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 23:11:36.398957       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 23:11:36.398732       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 23:11:36.399021       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 23:11:36.400396       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 23:11:49.665286       1 trace.go:219] Trace[2104157006]: "Update" accept:application/json, */*,audit-id:199a8eda-a3d2-486c-a890-1a28708c7058,client:192.168.39.6,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (17-Jul-2023 23:11:48.696) (total time: 968ms):
	Trace[2104157006]: ["GuaranteedUpdate etcd3" audit-id:199a8eda-a3d2-486c-a890-1a28708c7058,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 967ms (23:11:48.697)
	Trace[2104157006]:  ---"Txn call completed" 966ms (23:11:49.665)]
	Trace[2104157006]: [968.300914ms] [968.300914ms] END
	I0717 23:11:50.112735       1 trace.go:219] Trace[2086379880]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:d1d7bb1d-c54d-4133-b3c2-2cdceee0dfbb,client:127.0.0.1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-4rseqynxskwzjab6n3ldlcty3y,user-agent:kube-apiserver/v1.27.3 (linux/amd64) kubernetes/25b4e43,verb:PUT (17-Jul-2023 23:11:49.004) (total time: 1107ms):
	Trace[2086379880]: ["GuaranteedUpdate etcd3" audit-id:d1d7bb1d-c54d-4133-b3c2-2cdceee0dfbb,key:/leases/kube-system/apiserver-4rseqynxskwzjab6n3ldlcty3y,type:*coordination.Lease,resource:leases.coordination.k8s.io 1107ms (23:11:49.004)
	Trace[2086379880]:  ---"Txn call completed" 1106ms (23:11:50.112)]
	Trace[2086379880]: [1.107905618s] [1.107905618s] END
	I0717 23:11:50.362675       1 trace.go:219] Trace[806591684]: "List" accept:application/json, */*,audit-id:e323772f-69ff-4339-875e-f161b4be7066,client:192.168.39.1,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/kubernetes-dashboard/pods,user-agent:e2e-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,verb:LIST (17-Jul-2023 23:11:49.668) (total time: 693ms):
	Trace[806591684]: ["List(recursive=true) etcd3" audit-id:e323772f-69ff-4339-875e-f161b4be7066,key:/pods/kubernetes-dashboard,resourceVersion:,resourceVersionMatch:,limit:0,continue: 693ms (23:11:49.669)]
	Trace[806591684]: [693.678237ms] [693.678237ms] END
	
	* 
	* ==> kube-controller-manager [f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c] <==
	* W0717 23:05:49.264328       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:06:18.844429       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:06:19.273243       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:06:48.850819       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:06:49.284306       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:07:18.858656       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:07:19.293088       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:07:48.866830       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:07:49.302277       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:08:18.872557       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:08:19.313399       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:08:48.878369       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:08:49.321964       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:09:18.886466       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:09:19.331730       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:09:48.894427       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:09:49.341334       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:10:18.900311       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:10:19.350122       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:10:48.907470       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:10:49.359729       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:11:18.914403       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:11:19.369678       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:11:48.921466       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:11:49.377993       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	
	* 
	* ==> kube-proxy [9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567] <==
	* I0717 22:51:37.690801       1 node.go:141] Successfully retrieved node IP: 192.168.39.6
	I0717 22:51:37.691314       1 server_others.go:110] "Detected node IP" address="192.168.39.6"
	I0717 22:51:37.691656       1 server_others.go:554] "Using iptables proxy"
	I0717 22:51:37.828981       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0717 22:51:37.829166       1 server_others.go:192] "Using iptables Proxier"
	I0717 22:51:37.829216       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 22:51:37.829791       1 server.go:658] "Version info" version="v1.27.3"
	I0717 22:51:37.829977       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 22:51:37.831363       1 config.go:188] "Starting service config controller"
	I0717 22:51:37.831410       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 22:51:37.831444       1 config.go:97] "Starting endpoint slice config controller"
	I0717 22:51:37.831459       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 22:51:37.832034       1 config.go:315] "Starting node config controller"
	I0717 22:51:37.832076       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 22:51:37.931592       1 shared_informer.go:318] Caches are synced for service config
	I0717 22:51:37.931758       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0717 22:51:37.933663       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629] <==
	* I0717 22:51:32.583205       1 serving.go:348] Generated self-signed cert in-memory
	W0717 22:51:35.314570       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 22:51:35.314742       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 22:51:35.314792       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 22:51:35.314829       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 22:51:35.403073       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.3"
	I0717 22:51:35.407100       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 22:51:35.424286       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 22:51:35.424760       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 22:51:35.428670       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0717 22:51:35.428806       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 22:51:35.525127       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-07-17 22:50:43 UTC, ends at Mon 2023-07-17 23:12:15 UTC. --
	Jul 17 23:09:28 no-preload-935524 kubelet[1242]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 23:09:28 no-preload-935524 kubelet[1242]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 23:09:28 no-preload-935524 kubelet[1242]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 23:09:38 no-preload-935524 kubelet[1242]: E0717 23:09:38.713948    1242 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-tlbpl" podUID=7c478efe-4435-45dd-a688-745872fc2918
	Jul 17 23:09:53 no-preload-935524 kubelet[1242]: E0717 23:09:53.714276    1242 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-tlbpl" podUID=7c478efe-4435-45dd-a688-745872fc2918
	Jul 17 23:10:05 no-preload-935524 kubelet[1242]: E0717 23:10:05.713941    1242 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-tlbpl" podUID=7c478efe-4435-45dd-a688-745872fc2918
	Jul 17 23:10:18 no-preload-935524 kubelet[1242]: E0717 23:10:18.713698    1242 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-tlbpl" podUID=7c478efe-4435-45dd-a688-745872fc2918
	Jul 17 23:10:28 no-preload-935524 kubelet[1242]: E0717 23:10:28.731046    1242 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 17 23:10:28 no-preload-935524 kubelet[1242]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 23:10:28 no-preload-935524 kubelet[1242]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 23:10:28 no-preload-935524 kubelet[1242]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 23:10:33 no-preload-935524 kubelet[1242]: E0717 23:10:33.713955    1242 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-tlbpl" podUID=7c478efe-4435-45dd-a688-745872fc2918
	Jul 17 23:10:44 no-preload-935524 kubelet[1242]: E0717 23:10:44.714351    1242 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-tlbpl" podUID=7c478efe-4435-45dd-a688-745872fc2918
	Jul 17 23:10:58 no-preload-935524 kubelet[1242]: E0717 23:10:58.715257    1242 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-tlbpl" podUID=7c478efe-4435-45dd-a688-745872fc2918
	Jul 17 23:11:09 no-preload-935524 kubelet[1242]: E0717 23:11:09.713137    1242 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-tlbpl" podUID=7c478efe-4435-45dd-a688-745872fc2918
	Jul 17 23:11:21 no-preload-935524 kubelet[1242]: E0717 23:11:21.713294    1242 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-tlbpl" podUID=7c478efe-4435-45dd-a688-745872fc2918
	Jul 17 23:11:28 no-preload-935524 kubelet[1242]: E0717 23:11:28.708212    1242 container_manager_linux.go:515] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Jul 17 23:11:28 no-preload-935524 kubelet[1242]: E0717 23:11:28.733241    1242 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 17 23:11:28 no-preload-935524 kubelet[1242]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 23:11:28 no-preload-935524 kubelet[1242]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 23:11:28 no-preload-935524 kubelet[1242]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 23:11:34 no-preload-935524 kubelet[1242]: E0717 23:11:34.715689    1242 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-tlbpl" podUID=7c478efe-4435-45dd-a688-745872fc2918
	Jul 17 23:11:46 no-preload-935524 kubelet[1242]: E0717 23:11:46.715394    1242 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-tlbpl" podUID=7c478efe-4435-45dd-a688-745872fc2918
	Jul 17 23:11:57 no-preload-935524 kubelet[1242]: E0717 23:11:57.714827    1242 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-tlbpl" podUID=7c478efe-4435-45dd-a688-745872fc2918
	Jul 17 23:12:12 no-preload-935524 kubelet[1242]: E0717 23:12:12.726016    1242 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-tlbpl" podUID=7c478efe-4435-45dd-a688-745872fc2918
	
	* 
	* ==> storage-provisioner [4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6] <==
	* I0717 22:51:37.419777       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0717 22:52:07.430666       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b] <==
	* I0717 22:52:08.084080       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 22:52:08.106336       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 22:52:08.106713       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 22:52:25.512166       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 22:52:25.512331       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"86383f04-1a63-40f3-8c65-3b22e03ad414", APIVersion:"v1", ResourceVersion:"642", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-935524_4336aa79-edae-47dc-b9ae-4ebd35f74e08 became leader
	I0717 22:52:25.513206       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-935524_4336aa79-edae-47dc-b9ae-4ebd35f74e08!
	I0717 22:52:25.614204       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-935524_4336aa79-edae-47dc-b9ae-4ebd35f74e08!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-935524 -n no-preload-935524
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-935524 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5c6b9c-tlbpl
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-935524 describe pod metrics-server-74d5c6b9c-tlbpl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-935524 describe pod metrics-server-74d5c6b9c-tlbpl: exit status 1 (102.685826ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5c6b9c-tlbpl" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-935524 describe pod metrics-server-74d5c6b9c-tlbpl: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (429.03s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (123.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0717 23:08:11.892501   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-332820 -n old-k8s-version-332820
start_stop_delete_test.go:287: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-07-17 23:10:04.155730245 +0000 UTC m=+5367.945516249
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-332820 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-332820 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.965µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-332820 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-332820 -n old-k8s-version-332820
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-332820 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-332820 logs -n 25: (1.617774553s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-431736                                 | NoKubernetes-431736          | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:42 UTC |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p pause-482945                                        | pause-482945                 | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:42 UTC |
	| start   | -p embed-certs-571296                                  | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-366864                              | cert-expiration-366864       | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:42 UTC |
	| delete  | -p                                                     | disable-driver-mounts-615088 | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:42 UTC |
	|         | disable-driver-mounts-615088                           |                              |         |         |                     |                     |
	| start   | -p no-preload-935524                                   | no-preload-935524            | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| ssh     | -p NoKubernetes-431736 sudo                            | NoKubernetes-431736          | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-431736                                 | NoKubernetes-431736          | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:42 UTC |
	| start   | -p                                                     | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:44 UTC |
	|         | default-k8s-diff-port-504828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-332820        | old-k8s-version-332820       | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-332820                              | old-k8s-version-332820       | jenkins | v1.31.0 | 17 Jul 23 22:43 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-571296            | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 22:44 UTC | 17 Jul 23 22:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-571296                                  | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 22:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-935524             | no-preload-935524            | jenkins | v1.31.0 | 17 Jul 23 22:44 UTC | 17 Jul 23 22:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-935524                                   | no-preload-935524            | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-504828  | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC | 17 Jul 23 22:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC |                     |
	|         | default-k8s-diff-port-504828                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-332820             | old-k8s-version-332820       | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-332820                              | old-k8s-version-332820       | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC | 17 Jul 23 22:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-571296                 | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 22:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-571296                                  | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 22:46 UTC | 17 Jul 23 23:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-935524                  | no-preload-935524            | jenkins | v1.31.0 | 17 Jul 23 22:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-504828       | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-935524                                   | no-preload-935524            | jenkins | v1.31.0 | 17 Jul 23 22:47 UTC | 17 Jul 23 22:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:47 UTC | 17 Jul 23 23:01 UTC |
	|         | default-k8s-diff-port-504828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 22:47:37
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 22:47:37.527061   54649 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:47:37.527212   54649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:47:37.527221   54649 out.go:309] Setting ErrFile to fd 2...
	I0717 22:47:37.527228   54649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:47:37.527438   54649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-15759/.minikube/bin
	I0717 22:47:37.527980   54649 out.go:303] Setting JSON to false
	I0717 22:47:37.528901   54649 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9010,"bootTime":1689625048,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 22:47:37.528964   54649 start.go:138] virtualization: kvm guest
	I0717 22:47:37.531211   54649 out.go:177] * [default-k8s-diff-port-504828] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 22:47:37.533158   54649 notify.go:220] Checking for updates...
	I0717 22:47:37.533188   54649 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 22:47:37.535650   54649 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 22:47:37.537120   54649 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:47:37.538622   54649 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-15759/.minikube
	I0717 22:47:37.540087   54649 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 22:47:37.541460   54649 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 22:47:37.543023   54649 config.go:182] Loaded profile config "default-k8s-diff-port-504828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:47:37.543367   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:47:37.543410   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:47:37.557812   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41209
	I0717 22:47:37.558215   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:47:37.558854   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:47:37.558880   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:47:37.559209   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:47:37.559422   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:47:37.559654   54649 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 22:47:37.559930   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:47:37.559964   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:47:37.574919   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36243
	I0717 22:47:37.575395   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:47:37.575884   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:47:37.575907   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:47:37.576216   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:47:37.576373   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:47:37.609134   54649 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 22:47:37.610479   54649 start.go:298] selected driver: kvm2
	I0717 22:47:37.610497   54649 start.go:880] validating driver "kvm2" against &{Name:default-k8s-diff-port-504828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-504828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.118 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:47:37.610629   54649 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 22:47:37.611264   54649 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:37.611363   54649 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16899-15759/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 22:47:37.626733   54649 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.0
	I0717 22:47:37.627071   54649 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 22:47:37.627102   54649 cni.go:84] Creating CNI manager for ""
	I0717 22:47:37.627113   54649 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:47:37.627123   54649 start_flags.go:319] config:
	{Name:default-k8s-diff-port-504828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-504828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.118 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:47:37.627251   54649 iso.go:125] acquiring lock: {Name:mk0fc3164b0f4222cb2c3b274841202f1f6c0fc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:37.629965   54649 out.go:177] * Starting control plane node default-k8s-diff-port-504828 in cluster default-k8s-diff-port-504828
	I0717 22:47:32.766201   54573 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 22:47:32.766339   54573 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/config.json ...
	I0717 22:47:32.766467   54573 cache.go:107] acquiring lock: {Name:mk01bc74ef42cddd6cd05b75ec900cb2a05e15de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:32.766476   54573 cache.go:107] acquiring lock: {Name:mk672b2225edd60ecd8aa8e076d6e3579923204f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:32.766504   54573 cache.go:107] acquiring lock: {Name:mk1ec8b402c7d0685d25060e32c2f651eb2916fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:32.766539   54573 cache.go:107] acquiring lock: {Name:mkd18484b6a11488d3306ab3200047f68a7be660 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:32.766573   54573 start.go:365] acquiring machines lock for no-preload-935524: {Name:mk8a02d78ba6485e1a7d39c2e7feff944c6a9dbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 22:47:32.766576   54573 cache.go:107] acquiring lock: {Name:mkb3015efe537f010ace1f299991daca38e60845 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:32.766610   54573 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3 exists
	I0717 22:47:32.766586   54573 cache.go:107] acquiring lock: {Name:mkc8c0d0fa55ce47999adb3e73b20a24cafac7c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:32.766637   54573 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0 exists
	I0717 22:47:32.766653   54573 cache.go:96] cache image "registry.k8s.io/etcd:3.5.7-0" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0" took 100.155µs
	I0717 22:47:32.766659   54573 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I0717 22:47:32.766648   54573 cache.go:107] acquiring lock: {Name:mke2add190f322b938de65cf40269b08b3acfca3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:32.766656   54573 cache.go:107] acquiring lock: {Name:mk075beefd466e66915afc5543af4c3b175d5d80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:47:32.766681   54573 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 187.554µs
	I0717 22:47:32.766710   54573 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0717 22:47:32.766670   54573 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.7-0 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0 succeeded
	I0717 22:47:32.766735   54573 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1" took 88.679µs
	I0717 22:47:32.766748   54573 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0717 22:47:32.766629   54573 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3 exists
	I0717 22:47:32.766763   54573 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.27.3" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3" took 231.824µs
	I0717 22:47:32.766771   54573 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3 exists
	I0717 22:47:32.766717   54573 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I0717 22:47:32.766570   54573 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0717 22:47:32.766780   54573 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.27.3" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3" took 194.904µs
	I0717 22:47:32.766790   54573 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.27.3 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3 succeeded
	I0717 22:47:32.766787   54573 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 329.218µs
	I0717 22:47:32.766631   54573 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.27.3" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3" took 161.864µs
	I0717 22:47:32.766805   54573 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.27.3 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3 succeeded
	I0717 22:47:32.766774   54573 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.27.3 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3 succeeded
	I0717 22:47:32.766672   54573 cache.go:115] /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3 exists
	I0717 22:47:32.766820   54573 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.27.3" -> "/home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3" took 238.693µs
	I0717 22:47:32.766828   54573 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.27.3 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3 succeeded
	I0717 22:47:32.766797   54573 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0717 22:47:32.766834   54573 cache.go:87] Successfully saved all images to host disk.
	I0717 22:47:37.631294   54649 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 22:47:37.631336   54649 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0717 22:47:37.631348   54649 cache.go:57] Caching tarball of preloaded images
	I0717 22:47:37.631442   54649 preload.go:174] Found /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 22:47:37.631456   54649 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 22:47:37.631555   54649 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/config.json ...
	I0717 22:47:37.631742   54649 start.go:365] acquiring machines lock for default-k8s-diff-port-504828: {Name:mk8a02d78ba6485e1a7d39c2e7feff944c6a9dbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 22:47:37.905723   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:47:40.977774   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:47:47.057804   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:47:50.129875   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:47:56.209815   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:47:59.281810   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:05.361786   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:08.433822   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:14.513834   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:17.585682   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:23.665811   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:26.737819   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:32.817800   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:35.889839   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:41.969818   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:45.041851   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:51.121816   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:48:54.193896   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:00.273812   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:03.345848   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:09.425796   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:12.497873   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:18.577847   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:21.649767   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:27.729823   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:30.801947   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:36.881840   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:39.953832   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:46.033825   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:49.105862   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:55.185814   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:49:58.257881   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:50:04.337852   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:50:07.409871   53870 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.149:22: connect: no route to host
	I0717 22:50:10.413979   54248 start.go:369] acquired machines lock for "embed-certs-571296" in 3m17.321305769s
	I0717 22:50:10.414028   54248 start.go:96] Skipping create...Using existing machine configuration
	I0717 22:50:10.414048   54248 fix.go:54] fixHost starting: 
	I0717 22:50:10.414400   54248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:50:10.414437   54248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:50:10.428711   54248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35793
	I0717 22:50:10.429132   54248 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:50:10.429628   54248 main.go:141] libmachine: Using API Version  1
	I0717 22:50:10.429671   54248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:50:10.430088   54248 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:50:10.430301   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:50:10.430491   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetState
	I0717 22:50:10.432357   54248 fix.go:102] recreateIfNeeded on embed-certs-571296: state=Stopped err=<nil>
	I0717 22:50:10.432375   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	W0717 22:50:10.432552   54248 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 22:50:10.434264   54248 out.go:177] * Restarting existing kvm2 VM for "embed-certs-571296" ...
	I0717 22:50:10.411622   53870 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:50:10.411707   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:50:10.413827   53870 machine.go:91] provisioned docker machine in 4m37.430605556s
	I0717 22:50:10.413860   53870 fix.go:56] fixHost completed within 4m37.451042302s
	I0717 22:50:10.413870   53870 start.go:83] releasing machines lock for "old-k8s-version-332820", held for 4m37.451061598s
	W0717 22:50:10.413907   53870 start.go:672] error starting host: provision: host is not running
	W0717 22:50:10.414004   53870 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0717 22:50:10.414014   53870 start.go:687] Will try again in 5 seconds ...
	I0717 22:50:10.435984   54248 main.go:141] libmachine: (embed-certs-571296) Calling .Start
	I0717 22:50:10.436181   54248 main.go:141] libmachine: (embed-certs-571296) Ensuring networks are active...
	I0717 22:50:10.436939   54248 main.go:141] libmachine: (embed-certs-571296) Ensuring network default is active
	I0717 22:50:10.437252   54248 main.go:141] libmachine: (embed-certs-571296) Ensuring network mk-embed-certs-571296 is active
	I0717 22:50:10.437751   54248 main.go:141] libmachine: (embed-certs-571296) Getting domain xml...
	I0717 22:50:10.438706   54248 main.go:141] libmachine: (embed-certs-571296) Creating domain...
	I0717 22:50:10.795037   54248 main.go:141] libmachine: (embed-certs-571296) Waiting to get IP...
	I0717 22:50:10.795808   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:10.796178   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:10.796237   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:10.796156   55063 retry.go:31] will retry after 189.390538ms: waiting for machine to come up
	I0717 22:50:10.987904   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:10.988435   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:10.988466   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:10.988382   55063 retry.go:31] will retry after 260.75291ms: waiting for machine to come up
	I0717 22:50:11.250849   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:11.251279   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:11.251323   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:11.251218   55063 retry.go:31] will retry after 421.317262ms: waiting for machine to come up
	I0717 22:50:11.673813   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:11.674239   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:11.674259   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:11.674206   55063 retry.go:31] will retry after 512.64366ms: waiting for machine to come up
	I0717 22:50:12.188810   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:12.189271   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:12.189298   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:12.189222   55063 retry.go:31] will retry after 489.02322ms: waiting for machine to come up
	I0717 22:50:12.679695   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:12.680108   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:12.680137   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:12.680012   55063 retry.go:31] will retry after 589.269905ms: waiting for machine to come up
	I0717 22:50:15.415915   53870 start.go:365] acquiring machines lock for old-k8s-version-332820: {Name:mk8a02d78ba6485e1a7d39c2e7feff944c6a9dbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 22:50:13.270668   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:13.271039   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:13.271069   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:13.270984   55063 retry.go:31] will retry after 722.873214ms: waiting for machine to come up
	I0717 22:50:13.996101   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:13.996681   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:13.996711   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:13.996623   55063 retry.go:31] will retry after 1.381840781s: waiting for machine to come up
	I0717 22:50:15.379777   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:15.380169   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:15.380197   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:15.380118   55063 retry.go:31] will retry after 1.335563851s: waiting for machine to come up
	I0717 22:50:16.718113   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:16.718637   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:16.718660   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:16.718575   55063 retry.go:31] will retry after 1.96500286s: waiting for machine to come up
	I0717 22:50:18.685570   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:18.686003   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:18.686023   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:18.685960   55063 retry.go:31] will retry after 2.007114073s: waiting for machine to come up
	I0717 22:50:20.694500   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:20.694961   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:20.694984   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:20.694916   55063 retry.go:31] will retry after 3.344996038s: waiting for machine to come up
	I0717 22:50:24.043423   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:24.043777   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:24.043799   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:24.043732   55063 retry.go:31] will retry after 3.031269711s: waiting for machine to come up
	I0717 22:50:27.077029   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:27.077447   54248 main.go:141] libmachine: (embed-certs-571296) DBG | unable to find current IP address of domain embed-certs-571296 in network mk-embed-certs-571296
	I0717 22:50:27.077493   54248 main.go:141] libmachine: (embed-certs-571296) DBG | I0717 22:50:27.077379   55063 retry.go:31] will retry after 3.787872248s: waiting for machine to come up
	I0717 22:50:32.158403   54573 start.go:369] acquired machines lock for "no-preload-935524" in 2m59.391772757s
	I0717 22:50:32.158456   54573 start.go:96] Skipping create...Using existing machine configuration
	I0717 22:50:32.158478   54573 fix.go:54] fixHost starting: 
	I0717 22:50:32.158917   54573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:50:32.158960   54573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:50:32.177532   54573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41803
	I0717 22:50:32.177962   54573 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:50:32.178564   54573 main.go:141] libmachine: Using API Version  1
	I0717 22:50:32.178596   54573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:50:32.178981   54573 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:50:32.179197   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:50:32.179381   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetState
	I0717 22:50:32.181079   54573 fix.go:102] recreateIfNeeded on no-preload-935524: state=Stopped err=<nil>
	I0717 22:50:32.181104   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	W0717 22:50:32.181273   54573 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 22:50:32.183782   54573 out.go:177] * Restarting existing kvm2 VM for "no-preload-935524" ...
	I0717 22:50:32.185307   54573 main.go:141] libmachine: (no-preload-935524) Calling .Start
	I0717 22:50:32.185504   54573 main.go:141] libmachine: (no-preload-935524) Ensuring networks are active...
	I0717 22:50:32.186119   54573 main.go:141] libmachine: (no-preload-935524) Ensuring network default is active
	I0717 22:50:32.186543   54573 main.go:141] libmachine: (no-preload-935524) Ensuring network mk-no-preload-935524 is active
	I0717 22:50:32.186958   54573 main.go:141] libmachine: (no-preload-935524) Getting domain xml...
	I0717 22:50:32.187647   54573 main.go:141] libmachine: (no-preload-935524) Creating domain...
	I0717 22:50:32.567258   54573 main.go:141] libmachine: (no-preload-935524) Waiting to get IP...
	I0717 22:50:32.568423   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:32.568941   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:32.569021   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:32.568937   55160 retry.go:31] will retry after 239.368857ms: waiting for machine to come up
	I0717 22:50:30.866978   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:30.867476   54248 main.go:141] libmachine: (embed-certs-571296) Found IP for machine: 192.168.61.179
	I0717 22:50:30.867494   54248 main.go:141] libmachine: (embed-certs-571296) Reserving static IP address...
	I0717 22:50:30.867507   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has current primary IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:30.867958   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "embed-certs-571296", mac: "52:54:00:e0:4c:e5", ip: "192.168.61.179"} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:30.867994   54248 main.go:141] libmachine: (embed-certs-571296) Reserved static IP address: 192.168.61.179
	I0717 22:50:30.868012   54248 main.go:141] libmachine: (embed-certs-571296) DBG | skip adding static IP to network mk-embed-certs-571296 - found existing host DHCP lease matching {name: "embed-certs-571296", mac: "52:54:00:e0:4c:e5", ip: "192.168.61.179"}
	I0717 22:50:30.868034   54248 main.go:141] libmachine: (embed-certs-571296) DBG | Getting to WaitForSSH function...
	I0717 22:50:30.868052   54248 main.go:141] libmachine: (embed-certs-571296) Waiting for SSH to be available...
	I0717 22:50:30.870054   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:30.870366   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:30.870402   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:30.870514   54248 main.go:141] libmachine: (embed-certs-571296) DBG | Using SSH client type: external
	I0717 22:50:30.870545   54248 main.go:141] libmachine: (embed-certs-571296) DBG | Using SSH private key: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa (-rw-------)
	I0717 22:50:30.870596   54248 main.go:141] libmachine: (embed-certs-571296) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.179 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 22:50:30.870623   54248 main.go:141] libmachine: (embed-certs-571296) DBG | About to run SSH command:
	I0717 22:50:30.870637   54248 main.go:141] libmachine: (embed-certs-571296) DBG | exit 0
	I0717 22:50:30.965028   54248 main.go:141] libmachine: (embed-certs-571296) DBG | SSH cmd err, output: <nil>: 
	I0717 22:50:30.965413   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetConfigRaw
	I0717 22:50:30.966103   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetIP
	I0717 22:50:30.968689   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:30.969031   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:30.969068   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:30.969282   54248 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296/config.json ...
	I0717 22:50:30.969474   54248 machine.go:88] provisioning docker machine ...
	I0717 22:50:30.969491   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:50:30.969725   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetMachineName
	I0717 22:50:30.969910   54248 buildroot.go:166] provisioning hostname "embed-certs-571296"
	I0717 22:50:30.969928   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetMachineName
	I0717 22:50:30.970057   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:50:30.972055   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:30.972390   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:30.972416   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:30.972590   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:50:30.972732   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:30.972851   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:30.973006   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:50:30.973150   54248 main.go:141] libmachine: Using SSH client type: native
	I0717 22:50:30.973572   54248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.179 22 <nil> <nil>}
	I0717 22:50:30.973586   54248 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-571296 && echo "embed-certs-571296" | sudo tee /etc/hostname
	I0717 22:50:31.119085   54248 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-571296
	
	I0717 22:50:31.119112   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:50:31.121962   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.122254   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:31.122287   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.122439   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:50:31.122634   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:31.122824   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:31.122969   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:50:31.123140   54248 main.go:141] libmachine: Using SSH client type: native
	I0717 22:50:31.123581   54248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.179 22 <nil> <nil>}
	I0717 22:50:31.123607   54248 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-571296' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-571296/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-571296' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 22:50:31.262347   54248 main.go:141] libmachine: SSH cmd err, output: <nil>: 
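The two SSH commands above set the guest hostname and pin it in /etc/hosts. Assuming the default Buildroot guest image and no other edits, the end state they aim for is roughly:

    /etc/hostname:  embed-certs-571296
    /etc/hosts:     127.0.1.1 embed-certs-571296

(a sketch inferred from the commands themselves; the files on the VM may contain additional entries.)
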
	I0717 22:50:31.262373   54248 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16899-15759/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-15759/.minikube}
	I0717 22:50:31.262422   54248 buildroot.go:174] setting up certificates
	I0717 22:50:31.262431   54248 provision.go:83] configureAuth start
	I0717 22:50:31.262443   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetMachineName
	I0717 22:50:31.262717   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetIP
	I0717 22:50:31.265157   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.265555   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:31.265582   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.265716   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:50:31.267966   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.268299   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:31.268334   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.268482   54248 provision.go:138] copyHostCerts
	I0717 22:50:31.268529   54248 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem, removing ...
	I0717 22:50:31.268538   54248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem
	I0717 22:50:31.268602   54248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem (1078 bytes)
	I0717 22:50:31.268686   54248 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem, removing ...
	I0717 22:50:31.268698   54248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem
	I0717 22:50:31.268720   54248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem (1123 bytes)
	I0717 22:50:31.268769   54248 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem, removing ...
	I0717 22:50:31.268776   54248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem
	I0717 22:50:31.268794   54248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem (1675 bytes)
	I0717 22:50:31.268837   54248 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem org=jenkins.embed-certs-571296 san=[192.168.61.179 192.168.61.179 localhost 127.0.0.1 minikube embed-certs-571296]
	I0717 22:50:31.374737   54248 provision.go:172] copyRemoteCerts
	I0717 22:50:31.374796   54248 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 22:50:31.374818   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:50:31.377344   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.377664   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:31.377700   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.377873   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:50:31.378063   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:31.378223   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:50:31.378364   54248 sshutil.go:53] new ssh client: &{IP:192.168.61.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa Username:docker}
	I0717 22:50:31.474176   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 22:50:31.498974   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0717 22:50:31.522794   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 22:50:31.546276   54248 provision.go:86] duration metric: configureAuth took 283.830107ms
	I0717 22:50:31.546313   54248 buildroot.go:189] setting minikube options for container-runtime
	I0717 22:50:31.546521   54248 config.go:182] Loaded profile config "embed-certs-571296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:50:31.546603   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:50:31.549119   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.549485   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:31.549544   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.549716   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:50:31.549898   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:31.550056   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:31.550206   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:50:31.550376   54248 main.go:141] libmachine: Using SSH client type: native
	I0717 22:50:31.550819   54248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.179 22 <nil> <nil>}
	I0717 22:50:31.550837   54248 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 22:50:31.884933   54248 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 22:50:31.884960   54248 machine.go:91] provisioned docker machine in 915.473611ms
	I0717 22:50:31.884973   54248 start.go:300] post-start starting for "embed-certs-571296" (driver="kvm2")
	I0717 22:50:31.884985   54248 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 22:50:31.885011   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:50:31.885399   54248 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 22:50:31.885444   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:50:31.887965   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.888302   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:31.888338   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:31.888504   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:50:31.888710   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:31.888862   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:50:31.888988   54248 sshutil.go:53] new ssh client: &{IP:192.168.61.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa Username:docker}
	I0717 22:50:31.983951   54248 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 22:50:31.988220   54248 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 22:50:31.988248   54248 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/addons for local assets ...
	I0717 22:50:31.988334   54248 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/files for local assets ...
	I0717 22:50:31.988429   54248 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> 229902.pem in /etc/ssl/certs
	I0717 22:50:31.988543   54248 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 22:50:31.997933   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:50:32.020327   54248 start.go:303] post-start completed in 135.337882ms
	I0717 22:50:32.020353   54248 fix.go:56] fixHost completed within 21.60630369s
	I0717 22:50:32.020377   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:50:32.023026   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:32.023382   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:32.023415   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:32.023665   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:50:32.023873   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:32.024047   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:32.024193   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:50:32.024348   54248 main.go:141] libmachine: Using SSH client type: native
	I0717 22:50:32.024722   54248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.61.179 22 <nil> <nil>}
	I0717 22:50:32.024734   54248 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 22:50:32.158218   54248 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689634232.105028258
	
	I0717 22:50:32.158252   54248 fix.go:206] guest clock: 1689634232.105028258
	I0717 22:50:32.158262   54248 fix.go:219] Guest: 2023-07-17 22:50:32.105028258 +0000 UTC Remote: 2023-07-17 22:50:32.020356843 +0000 UTC m=+219.067919578 (delta=84.671415ms)
	I0717 22:50:32.158286   54248 fix.go:190] guest clock delta is within tolerance: 84.671415ms
	I0717 22:50:32.158292   54248 start.go:83] releasing machines lock for "embed-certs-571296", held for 21.74428315s
	I0717 22:50:32.158327   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:50:32.158592   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetIP
	I0717 22:50:32.161034   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:32.161385   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:32.161418   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:32.161609   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:50:32.162089   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:50:32.162247   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:50:32.162322   54248 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 22:50:32.162368   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:50:32.162453   54248 ssh_runner.go:195] Run: cat /version.json
	I0717 22:50:32.162474   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:50:32.165101   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:32.165235   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:32.165564   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:32.165591   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:32.165615   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:32.165637   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:32.165688   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:50:32.165806   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:50:32.165877   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:32.165995   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:50:32.166172   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:50:32.166181   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:50:32.166307   54248 sshutil.go:53] new ssh client: &{IP:192.168.61.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa Username:docker}
	I0717 22:50:32.166363   54248 sshutil.go:53] new ssh client: &{IP:192.168.61.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa Username:docker}
	I0717 22:50:32.285102   54248 ssh_runner.go:195] Run: systemctl --version
	I0717 22:50:32.291185   54248 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 22:50:32.437104   54248 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 22:50:32.443217   54248 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 22:50:32.443291   54248 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:50:32.461161   54248 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 22:50:32.461181   54248 start.go:466] detecting cgroup driver to use...
	I0717 22:50:32.461237   54248 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 22:50:32.483011   54248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 22:50:32.497725   54248 docker.go:196] disabling cri-docker service (if available) ...
	I0717 22:50:32.497788   54248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 22:50:32.512008   54248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 22:50:32.532595   54248 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 22:50:32.654303   54248 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 22:50:32.783140   54248 docker.go:212] disabling docker service ...
	I0717 22:50:32.783209   54248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 22:50:32.795822   54248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 22:50:32.809540   54248 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 22:50:32.923229   54248 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 22:50:33.025589   54248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 22:50:33.039420   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 22:50:33.056769   54248 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 22:50:33.056831   54248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:50:33.066205   54248 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 22:50:33.066277   54248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:50:33.075559   54248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:50:33.084911   54248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:50:33.094270   54248 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 22:50:33.103819   54248 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 22:50:33.112005   54248 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 22:50:33.112070   54248 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 22:50:33.125459   54248 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 22:50:33.134481   54248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 22:50:33.240740   54248 ssh_runner.go:195] Run: sudo systemctl restart crio
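The pause_image and cgroup_manager changes above are plain in-place edits of /etc/crio/crio.conf.d/02-crio.conf made with sed over SSH, followed by a daemon-reload and a crio restart. A minimal Go sketch of the same rewrite, run locally (illustrative only, not minikube's implementation; the path and values are taken from the log):

    // crio_conf.go: sketch of the sed-style key rewrite shown above.
    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setKey replaces any existing `key = ...` line, mirroring the sed expressions in the log.
    func setKey(conf []byte, key, value string) []byte {
        re := regexp.MustCompile(`(?m)^.*` + key + ` = .*$`)
        return re.ReplaceAll(conf, []byte(fmt.Sprintf(`%s = %q`, key, value)))
    }

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        conf, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.9")
        conf = setKey(conf, "cgroup_manager", "cgroupfs")
        if err := os.WriteFile(path, conf, 0o644); err != nil {
            panic(err)
        }
    }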
	I0717 22:50:33.418504   54248 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 22:50:33.418576   54248 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 22:50:33.424143   54248 start.go:534] Will wait 60s for crictl version
	I0717 22:50:33.424202   54248 ssh_runner.go:195] Run: which crictl
	I0717 22:50:33.428330   54248 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 22:50:33.465318   54248 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 22:50:33.465403   54248 ssh_runner.go:195] Run: crio --version
	I0717 22:50:33.516467   54248 ssh_runner.go:195] Run: crio --version
	I0717 22:50:33.569398   54248 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
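The two "Will wait 60s ..." steps above are simple polls: stat the CRI socket, then query crictl, until a deadline passes. A minimal sketch of that wait pattern (hypothetical helper, not minikube's code):

    // wait_socket.go: wait up to 60s for the CRI socket to appear, as in the log above.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForPath polls for path until it exists or the timeout expires.
    func waitForPath(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("crio socket is ready")
    }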
	I0717 22:50:32.810512   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:32.811060   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:32.811095   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:32.810988   55160 retry.go:31] will retry after 309.941434ms: waiting for machine to come up
	I0717 22:50:33.122633   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:33.123092   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:33.123138   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:33.123046   55160 retry.go:31] will retry after 487.561142ms: waiting for machine to come up
	I0717 22:50:33.611932   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:33.612512   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:33.612542   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:33.612485   55160 retry.go:31] will retry after 367.897327ms: waiting for machine to come up
	I0717 22:50:33.981820   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:33.982279   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:33.982326   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:33.982214   55160 retry.go:31] will retry after 630.28168ms: waiting for machine to come up
	I0717 22:50:34.614129   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:34.614625   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:34.614665   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:34.614569   55160 retry.go:31] will retry after 677.033607ms: waiting for machine to come up
	I0717 22:50:35.292873   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:35.293409   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:35.293443   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:35.293360   55160 retry.go:31] will retry after 1.011969157s: waiting for machine to come up
	I0717 22:50:36.306452   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:36.306895   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:36.306924   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:36.306836   55160 retry.go:31] will retry after 1.035213701s: waiting for machine to come up
	I0717 22:50:37.343727   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:37.344195   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:37.344227   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:37.344143   55160 retry.go:31] will retry after 1.820372185s: waiting for machine to come up
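The retry.go lines above ("will retry after 309ms ... 1.8s") show the wait-for-machine loop backing off with growing, jittered delays between attempts. A minimal sketch of that retry pattern (hypothetical helper, not minikube's retry package):

    // retry_backoff.go: poll a condition, sleeping a growing, jittered interval between attempts.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
        delay := initial
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            // Add up to 50% jitter, then grow the base delay, similar to the
            // 310ms, 487ms, ..., 1.8s intervals logged above.
            sleep := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
            fmt.Printf("will retry after %s: %v\n", sleep, err)
            time.Sleep(sleep)
            delay = delay * 3 / 2
        }
        return err
    }

    func main() {
        start := time.Now()
        err := retryWithBackoff(8, 300*time.Millisecond, func() error {
            if time.Since(start) < 2*time.Second {
                return errors.New("waiting for machine to come up")
            }
            return nil
        })
        fmt.Println("done:", err)
    }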
	I0717 22:50:33.571037   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetIP
	I0717 22:50:33.574233   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:33.574758   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:50:33.574796   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:50:33.575014   54248 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0717 22:50:33.579342   54248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:50:33.591600   54248 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 22:50:33.591678   54248 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:50:33.625951   54248 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 22:50:33.626026   54248 ssh_runner.go:195] Run: which lz4
	I0717 22:50:33.630581   54248 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 22:50:33.635135   54248 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 22:50:33.635171   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0717 22:50:35.389650   54248 crio.go:444] Took 1.759110 seconds to copy over tarball
	I0717 22:50:35.389728   54248 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 22:50:39.166682   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:39.167111   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:39.167146   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:39.167068   55160 retry.go:31] will retry after 1.739687633s: waiting for machine to come up
	I0717 22:50:40.909258   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:40.909752   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:40.909784   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:40.909694   55160 retry.go:31] will retry after 2.476966629s: waiting for machine to come up
	I0717 22:50:38.336151   54248 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.946397065s)
	I0717 22:50:38.336176   54248 crio.go:451] Took 2.946502 seconds to extract the tarball
	I0717 22:50:38.336184   54248 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 22:50:38.375618   54248 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:50:38.425357   54248 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 22:50:38.425377   54248 cache_images.go:84] Images are preloaded, skipping loading
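The preload decision above comes from parsing "sudo crictl images --output json" and checking whether the pinned control-plane image (registry.k8s.io/kube-apiserver:v1.27.3 in this run) is present. A rough sketch of that check, assuming crictl's usual JSON shape (illustrative, not minikube's code):

    // preload_check.go: ask crictl for its image list and look for a target image.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func hasImage(target string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            return false, err
        }
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                if tag == target {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.27.3")
        fmt.Println(ok, err)
    }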
	I0717 22:50:38.425449   54248 ssh_runner.go:195] Run: crio config
	I0717 22:50:38.511015   54248 cni.go:84] Creating CNI manager for ""
	I0717 22:50:38.511040   54248 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:50:38.511050   54248 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 22:50:38.511067   54248 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.179 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-571296 NodeName:embed-certs-571296 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.179"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.179 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 22:50:38.511213   54248 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.179
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-571296"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.179
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.179"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 22:50:38.511287   54248 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-571296 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.179
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:embed-certs-571296 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
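The kubeadm config above is rendered from per-node values (advertise address, node name, CRI socket, Kubernetes version). A minimal text/template sketch of that idea, using the values from this run (illustrative only; minikube's real template carries many more fields):

    // kubeadm_template.go: render a small InitConfiguration fragment from node values.
    package main

    import (
        "os"
        "text/template"
    )

    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
    `

    func main() {
        data := struct {
            NodeIP        string
            NodeName      string
            APIServerPort int
        }{"192.168.61.179", "embed-certs-571296", 8443}
        tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
        if err := tmpl.Execute(os.Stdout, data); err != nil {
            panic(err)
        }
    }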
	I0717 22:50:38.511340   54248 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 22:50:38.522373   54248 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 22:50:38.522432   54248 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 22:50:38.532894   54248 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0717 22:50:38.550814   54248 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 22:50:38.567038   54248 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
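The "scp memory" steps above write generated file contents directly to the guest, e.g. the 378-byte kubelet drop-in shown earlier. A minimal local sketch of materializing that drop-in (values taken from this log; not minikube's implementation):

    // kubelet_dropin.go: write the 10-kubeadm.conf kubelet override shown in the log.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        nodeIP, nodeName, k8sVersion := "192.168.61.179", "embed-certs-571296", "v1.27.3"
        unit := fmt.Sprintf(`[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/%[1]s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=%[2]s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%[3]s

    [Install]
    `, k8sVersion, nodeName, nodeIP)
        if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(unit), 0o644); err != nil {
            panic(err)
        }
    }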
	I0717 22:50:38.583844   54248 ssh_runner.go:195] Run: grep 192.168.61.179	control-plane.minikube.internal$ /etc/hosts
	I0717 22:50:38.587687   54248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.179	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:50:38.600458   54248 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296 for IP: 192.168.61.179
	I0717 22:50:38.600490   54248 certs.go:190] acquiring lock for shared ca certs: {Name:mk358cdd8ffcf2f8ada4337c2bb687d932ef5afe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:50:38.600617   54248 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key
	I0717 22:50:38.600659   54248 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key
	I0717 22:50:38.600721   54248 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296/client.key
	I0717 22:50:38.600774   54248 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296/apiserver.key.1b57fe25
	I0717 22:50:38.600820   54248 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296/proxy-client.key
	I0717 22:50:38.600929   54248 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem (1338 bytes)
	W0717 22:50:38.600955   54248 certs.go:433] ignoring /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990_empty.pem, impossibly tiny 0 bytes
	I0717 22:50:38.600966   54248 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 22:50:38.600986   54248 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem (1078 bytes)
	I0717 22:50:38.601017   54248 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem (1123 bytes)
	I0717 22:50:38.601050   54248 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem (1675 bytes)
	I0717 22:50:38.601093   54248 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:50:38.601734   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 22:50:38.627490   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 22:50:38.654423   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 22:50:38.682997   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/embed-certs-571296/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 22:50:38.712432   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 22:50:38.742901   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 22:50:38.768966   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 22:50:38.794778   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 22:50:38.819537   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 22:50:38.846730   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem --> /usr/share/ca-certificates/22990.pem (1338 bytes)
	I0717 22:50:38.870806   54248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /usr/share/ca-certificates/229902.pem (1708 bytes)
	I0717 22:50:38.894883   54248 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 22:50:38.911642   54248 ssh_runner.go:195] Run: openssl version
	I0717 22:50:38.917551   54248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 22:50:38.928075   54248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:50:38.932832   54248 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:50:38.932888   54248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:50:38.938574   54248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 22:50:38.948446   54248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22990.pem && ln -fs /usr/share/ca-certificates/22990.pem /etc/ssl/certs/22990.pem"
	I0717 22:50:38.958543   54248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22990.pem
	I0717 22:50:38.963637   54248 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 21:49 /usr/share/ca-certificates/22990.pem
	I0717 22:50:38.963687   54248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22990.pem
	I0717 22:50:38.969460   54248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22990.pem /etc/ssl/certs/51391683.0"
	I0717 22:50:38.979718   54248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/229902.pem && ln -fs /usr/share/ca-certificates/229902.pem /etc/ssl/certs/229902.pem"
	I0717 22:50:38.989796   54248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/229902.pem
	I0717 22:50:38.994721   54248 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 21:49 /usr/share/ca-certificates/229902.pem
	I0717 22:50:38.994779   54248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/229902.pem
	I0717 22:50:39.000394   54248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/229902.pem /etc/ssl/certs/3ec20f2e.0"
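Each CA above is installed into the OpenSSL trust directory by computing its subject hash (openssl x509 -hash -noout) and then symlinking /etc/ssl/certs/<hash>.0 to the PEM. A minimal sketch of that pair of steps (illustrative only; error handling kept minimal):

    // ca_symlink.go: install a CA under its subject-hash name, as the ln -fs steps above do.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func installCA(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // replicate `ln -fs` by removing any stale link first
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }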
	I0717 22:50:39.011176   54248 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 22:50:39.016792   54248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 22:50:39.022959   54248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 22:50:39.029052   54248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 22:50:39.035096   54248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 22:50:39.040890   54248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 22:50:39.047007   54248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
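The -checkend 86400 calls above ask whether each certificate will still be valid 24 hours from now. The same check in Go's crypto/x509 (a sketch, not minikube's code):

    // cert_checkend.go: report whether a certificate remains valid 24h from now.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "fmt"
        "os"
        "time"
    )

    func validFor(path string, d time.Duration) (bool, error) {
        raw, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            return false, errors.New("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := validFor("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
        fmt.Println(ok, err)
    }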
	I0717 22:50:39.053316   54248 kubeadm.go:404] StartCluster: {Name:embed-certs-571296 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-571296 Namespace:def
ault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.179 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/m
inikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:50:39.053429   54248 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 22:50:39.053479   54248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:50:39.082896   54248 cri.go:89] found id: ""
	I0717 22:50:39.082981   54248 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 22:50:39.092999   54248 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 22:50:39.093021   54248 kubeadm.go:636] restartCluster start
	I0717 22:50:39.093076   54248 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 22:50:39.102254   54248 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:39.103361   54248 kubeconfig.go:92] found "embed-certs-571296" server: "https://192.168.61.179:8443"
	I0717 22:50:39.105846   54248 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 22:50:39.114751   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:39.114825   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:39.125574   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:39.626315   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:39.626406   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:39.637943   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:40.126535   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:40.126643   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:40.139075   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:40.626167   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:40.626306   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:40.638180   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:41.125818   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:41.125919   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:41.137569   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:41.625798   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:41.625900   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:41.637416   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:42.125972   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:42.126076   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:42.137316   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:42.625866   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:42.625964   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:42.637524   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:43.388908   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:43.389400   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:43.389434   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:43.389373   55160 retry.go:31] will retry after 2.639442454s: waiting for machine to come up
	I0717 22:50:46.032050   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:46.032476   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:46.032510   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:46.032419   55160 retry.go:31] will retry after 2.750548097s: waiting for machine to come up
	I0717 22:50:43.126317   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:43.126425   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:43.137978   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:43.626637   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:43.626751   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:43.638260   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:44.125834   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:44.125922   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:44.136925   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:44.626547   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:44.626647   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:44.638426   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:45.125978   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:45.126061   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:45.137496   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:45.626448   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:45.626511   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:45.638236   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:46.125776   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:46.125849   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:46.137916   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:46.626561   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:46.626674   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:46.638555   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:47.126090   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:47.126210   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:47.138092   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:47.626721   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:47.626802   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:47.637828   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:48.785507   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:48.785955   54573 main.go:141] libmachine: (no-preload-935524) DBG | unable to find current IP address of domain no-preload-935524 in network mk-no-preload-935524
	I0717 22:50:48.785987   54573 main.go:141] libmachine: (no-preload-935524) DBG | I0717 22:50:48.785912   55160 retry.go:31] will retry after 4.05132206s: waiting for machine to come up
	I0717 22:50:48.126359   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:48.126438   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:48.137826   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:48.626413   54248 api_server.go:166] Checking apiserver status ...
	I0717 22:50:48.626507   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:50:48.638354   54248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:50:49.114916   54248 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
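The long run of "Checking apiserver status ..." entries above is a fixed-interval poll for a kube-apiserver process that gives up when its context deadline expires, which is the "context deadline exceeded" outcome logged here. A minimal sketch of that pattern (hypothetical helper, not minikube's implementation):

    // apiserver_poll.go: poll pgrep for a process until found or the context deadline passes.
    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func waitForProcess(ctx context.Context, pattern string) error {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            if err := exec.CommandContext(ctx, "pgrep", "-xnf", pattern).Run(); err == nil {
                return nil // process found
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        err := waitForProcess(ctx, "kube-apiserver.*minikube.*")
        fmt.Println(err) // context.DeadlineExceeded if the apiserver never appears
    }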
	I0717 22:50:49.114971   54248 kubeadm.go:1128] stopping kube-system containers ...
	I0717 22:50:49.114981   54248 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 22:50:49.115054   54248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:50:49.149465   54248 cri.go:89] found id: ""
	I0717 22:50:49.149558   54248 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 22:50:49.165197   54248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 22:50:49.174386   54248 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:50:49.174452   54248 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 22:50:49.183137   54248 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 22:50:49.183162   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:50:49.294495   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:50:50.169663   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:50:50.373276   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:50:50.485690   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:50:50.551312   54248 api_server.go:52] waiting for apiserver process to appear ...
	I0717 22:50:50.551389   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:50:51.066760   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:50:51.566423   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:50:52.066949   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:50:52.566304   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
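The restart path above re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml before polling for the apiserver again. A minimal sketch of that phase sequence run locally (minikube runs it over SSH with sudo):

    // restart_phases.go: re-run kubeadm init phases against an existing config.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, args := range phases {
            cmd := exec.Command("kubeadm", append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")...)
            cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.27.3:"+os.Getenv("PATH"))
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", args, err)
                os.Exit(1)
            }
        }
    }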
	I0717 22:50:54.227701   54649 start.go:369] acquired machines lock for "default-k8s-diff-port-504828" in 3m16.595911739s
	I0717 22:50:54.227764   54649 start.go:96] Skipping create...Using existing machine configuration
	I0717 22:50:54.227786   54649 fix.go:54] fixHost starting: 
	I0717 22:50:54.228206   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:50:54.228246   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:50:54.245721   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
	I0717 22:50:54.246143   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:50:54.246746   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:50:54.246783   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:50:54.247139   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:50:54.247353   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:50:54.247512   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetState
	I0717 22:50:54.249590   54649 fix.go:102] recreateIfNeeded on default-k8s-diff-port-504828: state=Stopped err=<nil>
	I0717 22:50:54.249630   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	W0717 22:50:54.249835   54649 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 22:50:54.251932   54649 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-504828" ...
	I0717 22:50:52.838478   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:52.839101   54573 main.go:141] libmachine: (no-preload-935524) Found IP for machine: 192.168.39.6
	I0717 22:50:52.839120   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has current primary IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:52.839129   54573 main.go:141] libmachine: (no-preload-935524) Reserving static IP address...
	I0717 22:50:52.839689   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "no-preload-935524", mac: "52:54:00:dc:7e:aa", ip: "192.168.39.6"} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:52.839724   54573 main.go:141] libmachine: (no-preload-935524) DBG | skip adding static IP to network mk-no-preload-935524 - found existing host DHCP lease matching {name: "no-preload-935524", mac: "52:54:00:dc:7e:aa", ip: "192.168.39.6"}
	I0717 22:50:52.839737   54573 main.go:141] libmachine: (no-preload-935524) Reserved static IP address: 192.168.39.6
	I0717 22:50:52.839752   54573 main.go:141] libmachine: (no-preload-935524) Waiting for SSH to be available...
	I0717 22:50:52.839769   54573 main.go:141] libmachine: (no-preload-935524) DBG | Getting to WaitForSSH function...
	I0717 22:50:52.842402   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:52.842739   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:52.842773   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:52.842861   54573 main.go:141] libmachine: (no-preload-935524) DBG | Using SSH client type: external
	I0717 22:50:52.842889   54573 main.go:141] libmachine: (no-preload-935524) DBG | Using SSH private key: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/no-preload-935524/id_rsa (-rw-------)
	I0717 22:50:52.842929   54573 main.go:141] libmachine: (no-preload-935524) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.6 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16899-15759/.minikube/machines/no-preload-935524/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 22:50:52.842947   54573 main.go:141] libmachine: (no-preload-935524) DBG | About to run SSH command:
	I0717 22:50:52.842962   54573 main.go:141] libmachine: (no-preload-935524) DBG | exit 0
	I0717 22:50:52.942283   54573 main.go:141] libmachine: (no-preload-935524) DBG | SSH cmd err, output: <nil>: 
	I0717 22:50:52.942665   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetConfigRaw
	I0717 22:50:52.943403   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetIP
	I0717 22:50:52.946152   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:52.946546   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:52.946587   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:52.946823   54573 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/config.json ...
	I0717 22:50:52.947043   54573 machine.go:88] provisioning docker machine ...
	I0717 22:50:52.947062   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:50:52.947259   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetMachineName
	I0717 22:50:52.947411   54573 buildroot.go:166] provisioning hostname "no-preload-935524"
	I0717 22:50:52.947431   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetMachineName
	I0717 22:50:52.947556   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:50:52.950010   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:52.950364   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:52.950394   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:52.950539   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:50:52.950709   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:52.950849   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:52.950980   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:50:52.951165   54573 main.go:141] libmachine: Using SSH client type: native
	I0717 22:50:52.951809   54573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0717 22:50:52.951831   54573 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-935524 && echo "no-preload-935524" | sudo tee /etc/hostname
	I0717 22:50:53.102629   54573 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-935524
	
	I0717 22:50:53.102665   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:50:53.105306   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.105689   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:53.105724   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.105856   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:50:53.106048   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:53.106219   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:53.106362   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:50:53.106504   54573 main.go:141] libmachine: Using SSH client type: native
	I0717 22:50:53.106886   54573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0717 22:50:53.106904   54573 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-935524' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-935524/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-935524' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 22:50:53.250601   54573 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:50:53.250631   54573 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16899-15759/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-15759/.minikube}
	I0717 22:50:53.250711   54573 buildroot.go:174] setting up certificates
	I0717 22:50:53.250721   54573 provision.go:83] configureAuth start
	I0717 22:50:53.250735   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetMachineName
	I0717 22:50:53.251063   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetIP
	I0717 22:50:53.253864   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.254309   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:53.254344   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.254513   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:50:53.256938   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.257385   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:53.257429   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.257534   54573 provision.go:138] copyHostCerts
	I0717 22:50:53.257595   54573 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem, removing ...
	I0717 22:50:53.257607   54573 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem
	I0717 22:50:53.257682   54573 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem (1078 bytes)
	I0717 22:50:53.257804   54573 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem, removing ...
	I0717 22:50:53.257816   54573 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem
	I0717 22:50:53.257843   54573 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem (1123 bytes)
	I0717 22:50:53.257929   54573 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem, removing ...
	I0717 22:50:53.257938   54573 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem
	I0717 22:50:53.257964   54573 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem (1675 bytes)
	I0717 22:50:53.258060   54573 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem org=jenkins.no-preload-935524 san=[192.168.39.6 192.168.39.6 localhost 127.0.0.1 minikube no-preload-935524]
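The "generating server cert ... san=[...]" step above issues a machine server certificate covering the listed IPs and hostnames. A related sketch that checks whether an existing server.pem already covers those SANs (illustrative only; not how libmachine decides whether to regenerate):

    // san_check.go: verify a server certificate covers the required SANs.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        raw, err := os.ReadFile("/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            panic("no PEM block in server.pem")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        for _, san := range []string{"192.168.39.6", "localhost", "127.0.0.1", "minikube", "no-preload-935524"} {
            if err := cert.VerifyHostname(san); err != nil {
                fmt.Printf("missing SAN %q: %v\n", san, err)
            }
        }
    }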
	I0717 22:50:53.392234   54573 provision.go:172] copyRemoteCerts
	I0717 22:50:53.392307   54573 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 22:50:53.392335   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:50:53.395139   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.395529   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:53.395560   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.395734   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:50:53.395932   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:53.396109   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:50:53.396268   54573 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/no-preload-935524/id_rsa Username:docker}
	I0717 22:50:53.495214   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 22:50:53.523550   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0717 22:50:53.552276   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 22:50:53.576026   54573 provision.go:86] duration metric: configureAuth took 325.291158ms
	I0717 22:50:53.576057   54573 buildroot.go:189] setting minikube options for container-runtime
	I0717 22:50:53.576313   54573 config.go:182] Loaded profile config "no-preload-935524": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:50:53.576414   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:50:53.578969   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.579363   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:53.579404   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.579585   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:50:53.579783   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:53.579943   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:53.580113   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:50:53.580302   54573 main.go:141] libmachine: Using SSH client type: native
	I0717 22:50:53.580952   54573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0717 22:50:53.580979   54573 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 22:50:53.948696   54573 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 22:50:53.948725   54573 machine.go:91] provisioned docker machine in 1.001666705s
	I0717 22:50:53.948737   54573 start.go:300] post-start starting for "no-preload-935524" (driver="kvm2")
	I0717 22:50:53.948756   54573 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 22:50:53.948788   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:50:53.949144   54573 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 22:50:53.949179   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:50:53.951786   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.952221   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:53.952255   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:53.952468   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:50:53.952642   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:53.952863   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:50:53.953001   54573 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/no-preload-935524/id_rsa Username:docker}
	I0717 22:50:54.054995   54573 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 22:50:54.060431   54573 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 22:50:54.060455   54573 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/addons for local assets ...
	I0717 22:50:54.060524   54573 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/files for local assets ...
	I0717 22:50:54.060624   54573 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> 229902.pem in /etc/ssl/certs
	I0717 22:50:54.060737   54573 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 22:50:54.072249   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:50:54.094894   54573 start.go:303] post-start completed in 146.143243ms
	I0717 22:50:54.094919   54573 fix.go:56] fixHost completed within 21.936441056s
	I0717 22:50:54.094937   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:50:54.097560   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:54.097893   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:54.097926   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:54.098153   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:50:54.098377   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:54.098561   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:54.098729   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:50:54.098899   54573 main.go:141] libmachine: Using SSH client type: native
	I0717 22:50:54.099308   54573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0717 22:50:54.099323   54573 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 22:50:54.227537   54573 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689634254.168158155
	
	I0717 22:50:54.227562   54573 fix.go:206] guest clock: 1689634254.168158155
	I0717 22:50:54.227573   54573 fix.go:219] Guest: 2023-07-17 22:50:54.168158155 +0000 UTC Remote: 2023-07-17 22:50:54.094922973 +0000 UTC m=+201.463147612 (delta=73.235182ms)
	I0717 22:50:54.227598   54573 fix.go:190] guest clock delta is within tolerance: 73.235182ms
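The fix.go lines above read the guest's clock over SSH with `date +%s.%N`, compare it to the host's, and accept the skew when it falls within a tolerance. A minimal sketch of that comparison, assuming the raw `date` output is already captured and using an illustrative 2-second tolerance (the real threshold lives in minikube's fix.go and is not shown in this log):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseDateOutput turns "1689634254.168158155" (date +%s.%N) into a time.Time.
func parseDateOutput(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Value captured from the guest over SSH (see the log line above).
	guest, err := parseDateOutput("1689634254.168158155")
	if err != nil {
		panic(err)
	}
	host := time.Now()

	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}

	const tolerance = 2 * time.Second // illustrative only
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; would resync\n", delta)
	}
}
```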
	I0717 22:50:54.227604   54573 start.go:83] releasing machines lock for "no-preload-935524", held for 22.06917115s
	I0717 22:50:54.227636   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:50:54.227891   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetIP
	I0717 22:50:54.230831   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:54.231223   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:54.231262   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:54.231367   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:50:54.231932   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:50:54.232109   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:50:54.232181   54573 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 22:50:54.232226   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:50:54.232322   54573 ssh_runner.go:195] Run: cat /version.json
	I0717 22:50:54.232354   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:50:54.235001   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:54.235351   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:54.235429   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:54.235463   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:54.235600   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:50:54.235791   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:54.235825   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:54.235857   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:54.235969   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:50:54.236027   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:50:54.236119   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:50:54.236253   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:50:54.236254   54573 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/no-preload-935524/id_rsa Username:docker}
	I0717 22:50:54.236392   54573 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/no-preload-935524/id_rsa Username:docker}
	I0717 22:50:54.360160   54573 ssh_runner.go:195] Run: systemctl --version
	I0717 22:50:54.367093   54573 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 22:50:54.523956   54573 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 22:50:54.531005   54573 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 22:50:54.531121   54573 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:50:54.548669   54573 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 22:50:54.548697   54573 start.go:466] detecting cgroup driver to use...
	I0717 22:50:54.548768   54573 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 22:50:54.564722   54573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 22:50:54.577237   54573 docker.go:196] disabling cri-docker service (if available) ...
	I0717 22:50:54.577303   54573 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 22:50:54.590625   54573 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 22:50:54.603897   54573 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 22:50:54.731958   54573 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 22:50:54.862565   54573 docker.go:212] disabling docker service ...
	I0717 22:50:54.862632   54573 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 22:50:54.875946   54573 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 22:50:54.888617   54573 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 22:50:54.997410   54573 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 22:50:55.110094   54573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 22:50:55.123729   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 22:50:55.144670   54573 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 22:50:55.144754   54573 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:50:55.154131   54573 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 22:50:55.154193   54573 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:50:55.164669   54573 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:50:55.177189   54573 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:50:55.189292   54573 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 22:50:55.204022   54573 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 22:50:55.212942   54573 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 22:50:55.213006   54573 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 22:50:55.232951   54573 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 22:50:55.246347   54573 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 22:50:55.366491   54573 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 22:50:55.544250   54573 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 22:50:55.544336   54573 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 22:50:55.550952   54573 start.go:534] Will wait 60s for crictl version
	I0717 22:50:55.551021   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:50:55.558527   54573 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 22:50:55.602591   54573 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 22:50:55.602687   54573 ssh_runner.go:195] Run: crio --version
	I0717 22:50:55.663719   54573 ssh_runner.go:195] Run: crio --version
	I0717 22:50:55.726644   54573 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
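After restarting CRI-O, the log notes a 60-second wait for the socket at /var/run/crio/crio.sock before probing `crictl version`. A rough sketch of that kind of wait, assuming a simple stat-based poll with a hypothetical `waitForSocket` helper (the actual start.go logic may differ):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for path until it exists or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is available")
}
```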
	I0717 22:50:54.253440   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .Start
	I0717 22:50:54.253678   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Ensuring networks are active...
	I0717 22:50:54.254444   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Ensuring network default is active
	I0717 22:50:54.254861   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Ensuring network mk-default-k8s-diff-port-504828 is active
	I0717 22:50:54.255337   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Getting domain xml...
	I0717 22:50:54.256194   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Creating domain...
	I0717 22:50:54.643844   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting to get IP...
	I0717 22:50:54.644894   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:50:54.645362   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:50:54.645465   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:50:54.645359   55310 retry.go:31] will retry after 296.655364ms: waiting for machine to come up
	I0717 22:50:54.943927   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:50:54.944465   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:50:54.944500   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:50:54.944408   55310 retry.go:31] will retry after 351.801959ms: waiting for machine to come up
	I0717 22:50:55.298164   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:50:55.298678   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:50:55.298710   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:50:55.298642   55310 retry.go:31] will retry after 354.726659ms: waiting for machine to come up
	I0717 22:50:55.655122   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:50:55.655582   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:50:55.655710   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:50:55.655633   55310 retry.go:31] will retry after 540.353024ms: waiting for machine to come up
	I0717 22:50:56.197370   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:50:56.197929   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:50:56.197963   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:50:56.197897   55310 retry.go:31] will retry after 602.667606ms: waiting for machine to come up
	I0717 22:50:56.802746   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:50:56.803401   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:50:56.803431   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:50:56.803344   55310 retry.go:31] will retry after 675.557445ms: waiting for machine to come up
	I0717 22:50:57.480002   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:50:57.480476   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:50:57.480508   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:50:57.480423   55310 retry.go:31] will retry after 898.307594ms: waiting for machine to come up
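The retry.go lines from the default-k8s-diff-port-504828 start show the driver polling for the domain's DHCP lease with growing, jittered intervals until an IP appears. A compact sketch of that pattern, where `lookupIP` is a hypothetical stand-in for the libvirt lease query and the backoff constants are illustrative:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("unable to find current IP address")

// lookupIP is a stand-in for querying the libvirt DHCP leases; here it is
// hypothetical and simply fails a few times before "finding" an address.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errNoLease
	}
	return "192.168.72.118", nil
}

func main() {
	backoff := 300 * time.Millisecond
	for attempt := 0; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("Found IP for machine:", ip)
			return
		}
		// Grow the interval and add jitter, roughly like the retry.go lines above.
		wait := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		backoff = backoff * 3 / 2
	}
}
```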
	I0717 22:50:55.728247   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetIP
	I0717 22:50:55.731423   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:55.731871   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:50:55.731910   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:50:55.732109   54573 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 22:50:55.736921   54573 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:50:55.751844   54573 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 22:50:55.751895   54573 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:50:55.787286   54573 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 22:50:55.787316   54573 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.27.3 registry.k8s.io/kube-controller-manager:v1.27.3 registry.k8s.io/kube-scheduler:v1.27.3 registry.k8s.io/kube-proxy:v1.27.3 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.7-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 22:50:55.787387   54573 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:50:55.787398   54573 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.27.3
	I0717 22:50:55.787418   54573 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 22:50:55.787450   54573 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.27.3
	I0717 22:50:55.787589   54573 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.27.3
	I0717 22:50:55.787602   54573 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0717 22:50:55.787630   54573 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.7-0
	I0717 22:50:55.787648   54573 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0717 22:50:55.788865   54573 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.27.3
	I0717 22:50:55.788870   54573 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0717 22:50:55.788875   54573 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:50:55.788919   54573 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 22:50:55.788929   54573 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0717 22:50:55.788869   54573 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.27.3
	I0717 22:50:55.788955   54573 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.27.3
	I0717 22:50:55.789279   54573 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.7-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.7-0
	I0717 22:50:55.956462   54573 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.27.3
	I0717 22:50:55.959183   54573 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0717 22:50:55.960353   54573 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.27.3
	I0717 22:50:55.961871   54573 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.7-0
	I0717 22:50:55.963472   54573 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0717 22:50:55.970739   54573 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 22:50:55.992476   54573 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.27.3
	I0717 22:50:56.099305   54573 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.27.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.27.3" does not exist at hash "41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a" in container runtime
	I0717 22:50:56.099353   54573 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.27.3
	I0717 22:50:56.099399   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:50:56.144906   54573 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:50:56.175359   54573 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.27.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.27.3" does not exist at hash "08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a" in container runtime
	I0717 22:50:56.175407   54573 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.27.3
	I0717 22:50:56.175409   54573 cache_images.go:116] "registry.k8s.io/etcd:3.5.7-0" needs transfer: "registry.k8s.io/etcd:3.5.7-0" does not exist at hash "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681" in container runtime
	I0717 22:50:56.175444   54573 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.7-0
	I0717 22:50:56.175508   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:50:56.175549   54573 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0717 22:50:56.175452   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:50:56.175577   54573 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0717 22:50:56.175622   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:50:56.205829   54573 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.27.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.27.3" does not exist at hash "7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f" in container runtime
	I0717 22:50:56.205877   54573 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 22:50:56.205929   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:50:56.205962   54573 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.27.3
	I0717 22:50:56.205875   54573 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.27.3" needs transfer: "registry.k8s.io/kube-proxy:v1.27.3" does not exist at hash "5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c" in container runtime
	I0717 22:50:56.206017   54573 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.27.3
	I0717 22:50:56.206039   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:50:56.230299   54573 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0717 22:50:56.230358   54573 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:50:56.230406   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:50:56.230508   54573 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0717 22:50:56.230526   54573 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.7-0
	I0717 22:50:56.230585   54573 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.27.3
	I0717 22:50:56.230619   54573 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.27.3
	I0717 22:50:56.280737   54573 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.27.3
	I0717 22:50:56.280740   54573 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3
	I0717 22:50:56.280876   54573 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0717 22:50:56.346096   54573 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0
	I0717 22:50:56.346185   54573 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0717 22:50:56.346213   54573 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.7-0
	I0717 22:50:56.346257   54573 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3
	I0717 22:50:56.346281   54573 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I0717 22:50:56.346325   54573 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:50:56.346360   54573 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3
	I0717 22:50:56.346370   54573 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0717 22:50:56.346409   54573 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0717 22:50:56.361471   54573 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3
	I0717 22:50:56.361511   54573 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.27.3 (exists)
	I0717 22:50:56.361546   54573 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0717 22:50:56.361605   54573 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.27.3
	I0717 22:50:56.361606   54573 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.27.3
	I0717 22:50:56.410058   54573 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 22:50:56.410140   54573 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.27.3 (exists)
	I0717 22:50:56.410177   54573 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0717 22:50:56.410222   54573 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0717 22:50:56.410317   54573 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.7-0 (exists)
	I0717 22:50:56.410389   54573 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.27.3 (exists)
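The cache_images.go lines above follow a simple pattern for each required image: inspect it in the runtime, and when the stored hash does not match, mark it "needs transfer", remove the stale copy with crictl, and reload it from the local tarball cache with `podman load`. A hedged sketch of that flow using the same commands the log shows (image name, hash, and tarball path copied from the log; `ensureImage` and the trimmed error handling are illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageID asks podman for the stored ID of an image; an error or empty
// output means the image is not present in the container runtime.
func imageID(image string) (string, error) {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	return strings.TrimSpace(string(out)), err
}

// ensureImage reloads an image from a cached tarball when it is missing
// or does not match the expected ID, mirroring the "needs transfer" path.
func ensureImage(image, wantID, tarball string) error {
	id, err := imageID(image)
	if err == nil && id == wantID {
		return nil // already loaded with the expected hash
	}
	// Drop any stale copy, then load the cached archive.
	_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()
	if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
		return fmt.Errorf("loading %s from %s: %w", image, tarball, err)
	}
	return nil
}

func main() {
	err := ensureImage(
		"registry.k8s.io/kube-scheduler:v1.27.3",
		"41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a",
		"/var/lib/minikube/images/kube-scheduler_v1.27.3",
	)
	if err != nil {
		fmt.Println(err)
	}
}
```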
	I0717 22:50:53.066719   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:50:53.096978   54248 api_server.go:72] duration metric: took 2.545662837s to wait for apiserver process to appear ...
	I0717 22:50:53.097002   54248 api_server.go:88] waiting for apiserver healthz status ...
	I0717 22:50:53.097021   54248 api_server.go:253] Checking apiserver healthz at https://192.168.61.179:8443/healthz ...
	I0717 22:50:57.043968   54248 api_server.go:279] https://192.168.61.179:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 22:50:57.044010   54248 api_server.go:103] status: https://192.168.61.179:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 22:50:57.544722   54248 api_server.go:253] Checking apiserver healthz at https://192.168.61.179:8443/healthz ...
	I0717 22:50:57.550687   54248 api_server.go:279] https://192.168.61.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 22:50:57.550718   54248 api_server.go:103] status: https://192.168.61.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 22:50:58.045135   54248 api_server.go:253] Checking apiserver healthz at https://192.168.61.179:8443/healthz ...
	I0717 22:50:58.058934   54248 api_server.go:279] https://192.168.61.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 22:50:58.058970   54248 api_server.go:103] status: https://192.168.61.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 22:50:58.544766   54248 api_server.go:253] Checking apiserver healthz at https://192.168.61.179:8443/healthz ...
	I0717 22:50:58.550628   54248 api_server.go:279] https://192.168.61.179:8443/healthz returned 200:
	ok
	I0717 22:50:58.559879   54248 api_server.go:141] control plane version: v1.27.3
	I0717 22:50:58.559912   54248 api_server.go:131] duration metric: took 5.462902985s to wait for apiserver health ...
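The api_server.go lines above poll /healthz roughly every half second, tolerating the 403 (anonymous user) and 500 (rbac/bootstrap-roles post-start hook still failing) responses until a 200 comes back. A minimal sketch of such a poll, assuming an unauthenticated probe that skips TLS verification (as the anonymous 403 above suggests no client credentials are sent); `waitForHealthz` and the interval are illustrative:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the timeout elapses; 403/500 responses are treated as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Anonymous probe against the apiserver's self-signed serving cert.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.179:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```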
	I0717 22:50:58.559925   54248 cni.go:84] Creating CNI manager for ""
	I0717 22:50:58.559936   54248 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:50:58.605706   54248 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 22:50:58.380501   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:50:58.380825   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:50:58.380842   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:50:58.380780   55310 retry.go:31] will retry after 1.23430246s: waiting for machine to come up
	I0717 22:50:59.617145   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:50:59.617808   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:50:59.617841   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:50:59.617730   55310 retry.go:31] will retry after 1.214374623s: waiting for machine to come up
	I0717 22:51:00.834129   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:00.834639   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:51:00.834680   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:51:00.834594   55310 retry.go:31] will retry after 1.950432239s: waiting for machine to come up
	I0717 22:50:58.680414   54573 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.27.3: (2.318705948s)
	I0717 22:50:58.680448   54573 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.27.3 from cache
	I0717 22:50:58.680485   54573 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.27.3: (2.318846109s)
	I0717 22:50:58.680525   54573 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.27.3 (exists)
	I0717 22:50:58.680548   54573 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.270351678s)
	I0717 22:50:58.680595   54573 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0717 22:50:58.680614   54573 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0717 22:50:58.680674   54573 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.27.3
	I0717 22:51:01.356090   54573 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.27.3: (2.675377242s)
	I0717 22:51:01.356124   54573 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.27.3 from cache
	I0717 22:51:01.356174   54573 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0717 22:51:01.356232   54573 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0717 22:50:58.607184   54248 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 22:50:58.656720   54248 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 22:50:58.740705   54248 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 22:50:58.760487   54248 system_pods.go:59] 8 kube-system pods found
	I0717 22:50:58.760530   54248 system_pods.go:61] "coredns-5d78c9869d-pwd8q" [f8079ab4-1d34-4847-bdb9-7d0a500ed732] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 22:50:58.760542   54248 system_pods.go:61] "etcd-embed-certs-571296" [e2a4f2bb-a767-484f-9339-7024168bb59d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 22:50:58.760553   54248 system_pods.go:61] "kube-apiserver-embed-certs-571296" [313d49ba-2814-49e7-8b97-9c278fd33686] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 22:50:58.760600   54248 system_pods.go:61] "kube-controller-manager-embed-certs-571296" [03ede9e6-f06a-45a2-bafc-0ae24db96be8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 22:50:58.760720   54248 system_pods.go:61] "kube-proxy-kpt5d" [109fb9ce-61ab-46b0-aaf8-478d61c16fe9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 22:50:58.760754   54248 system_pods.go:61] "kube-scheduler-embed-certs-571296" [a10941b1-ac81-4224-bc9e-89228ad3d5c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 22:50:58.760765   54248 system_pods.go:61] "metrics-server-74d5c6b9c-jl7jl" [251ed989-12c1-49e5-bec1-114c3548c8fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:50:58.760784   54248 system_pods.go:61] "storage-provisioner" [fb7f6371-8788-4037-8eaf-6dc2189102ec] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 22:50:58.760795   54248 system_pods.go:74] duration metric: took 20.068616ms to wait for pod list to return data ...
	I0717 22:50:58.760807   54248 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:50:58.777293   54248 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:50:58.777328   54248 node_conditions.go:123] node cpu capacity is 2
	I0717 22:50:58.777343   54248 node_conditions.go:105] duration metric: took 16.528777ms to run NodePressure ...
	I0717 22:50:58.777364   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:50:59.270627   54248 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 22:50:59.277045   54248 kubeadm.go:787] kubelet initialised
	I0717 22:50:59.277074   54248 kubeadm.go:788] duration metric: took 6.413321ms waiting for restarted kubelet to initialise ...
	I0717 22:50:59.277083   54248 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:50:59.285338   54248 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-pwd8q" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:01.304495   54248 pod_ready.go:102] pod "coredns-5d78c9869d-pwd8q" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:02.787568   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:02.788090   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:51:02.788118   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:51:02.788031   55310 retry.go:31] will retry after 2.897894179s: waiting for machine to come up
	I0717 22:51:05.687387   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:05.687774   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:51:05.687816   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:51:05.687724   55310 retry.go:31] will retry after 3.029953032s: waiting for machine to come up
	I0717 22:51:02.822684   54573 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.466424442s)
	I0717 22:51:02.822717   54573 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0717 22:51:02.822741   54573 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.7-0
	I0717 22:51:02.822790   54573 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.7-0
	I0717 22:51:03.306481   54248 pod_ready.go:102] pod "coredns-5d78c9869d-pwd8q" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:04.302530   54248 pod_ready.go:92] pod "coredns-5d78c9869d-pwd8q" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:04.302560   54248 pod_ready.go:81] duration metric: took 5.01718551s waiting for pod "coredns-5d78c9869d-pwd8q" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:04.302573   54248 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:06.320075   54248 pod_ready.go:102] pod "etcd-embed-certs-571296" in "kube-system" namespace has status "Ready":"False"
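The pod_ready.go lines above wait for each system-critical pod to report the Ready condition, logging the current status while it is still False. A sketch of the same check written against client-go (assumptions: a kubeconfig at the default location, the pod name taken from the log, and `isPodReady` plus the 2-second poll interval are illustrative rather than minikube's actual helpers):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "etcd-embed-certs-571296", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Println(`pod has status "Ready":"False"`)
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to be Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}
```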
	I0717 22:51:08.719593   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:08.720084   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | unable to find current IP address of domain default-k8s-diff-port-504828 in network mk-default-k8s-diff-port-504828
	I0717 22:51:08.720116   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | I0717 22:51:08.720015   55310 retry.go:31] will retry after 3.646843477s: waiting for machine to come up
	I0717 22:51:12.370696   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.371189   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Found IP for machine: 192.168.72.118
	I0717 22:51:12.371225   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has current primary IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.371237   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Reserving static IP address...
	I0717 22:51:12.371698   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-504828", mac: "52:54:00:28:6f:f7", ip: "192.168.72.118"} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:12.371729   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Reserved static IP address: 192.168.72.118
	I0717 22:51:12.371747   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | skip adding static IP to network mk-default-k8s-diff-port-504828 - found existing host DHCP lease matching {name: "default-k8s-diff-port-504828", mac: "52:54:00:28:6f:f7", ip: "192.168.72.118"}
	I0717 22:51:12.371759   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Waiting for SSH to be available...
	I0717 22:51:12.371774   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | Getting to WaitForSSH function...
	I0717 22:51:12.374416   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.374804   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:12.374839   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.374958   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | Using SSH client type: external
	I0717 22:51:12.375000   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | Using SSH private key: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa (-rw-------)
	I0717 22:51:12.375056   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.118 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 22:51:12.375078   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | About to run SSH command:
	I0717 22:51:12.375103   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | exit 0
	I0717 22:51:12.461844   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | SSH cmd err, output: <nil>: 
	I0717 22:51:12.462190   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetConfigRaw
	I0717 22:51:12.462878   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetIP
	I0717 22:51:12.465698   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.466129   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:12.466171   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.466432   54649 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/config.json ...
	I0717 22:51:12.466686   54649 machine.go:88] provisioning docker machine ...
	I0717 22:51:12.466713   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:51:12.466932   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetMachineName
	I0717 22:51:12.467149   54649 buildroot.go:166] provisioning hostname "default-k8s-diff-port-504828"
	I0717 22:51:12.467174   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetMachineName
	I0717 22:51:12.467336   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:51:12.469892   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.470309   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:12.470347   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.470539   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:51:12.470711   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:12.470906   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:12.471075   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:51:12.471251   54649 main.go:141] libmachine: Using SSH client type: native
	I0717 22:51:12.471709   54649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.118 22 <nil> <nil>}
	I0717 22:51:12.471728   54649 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-504828 && echo "default-k8s-diff-port-504828" | sudo tee /etc/hostname
	I0717 22:51:10.226119   54573 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.7-0: (7.403300342s)
	I0717 22:51:10.226147   54573 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.7-0 from cache
	I0717 22:51:10.226176   54573 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0717 22:51:10.226231   54573 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.27.3
	I0717 22:51:12.580664   54573 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.27.3: (2.354394197s)
	I0717 22:51:12.580698   54573 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.27.3 from cache
	I0717 22:51:12.580729   54573 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.27.3
	I0717 22:51:12.580786   54573 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.27.3
	I0717 22:51:08.320182   54248 pod_ready.go:92] pod "etcd-embed-certs-571296" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:08.320212   54248 pod_ready.go:81] duration metric: took 4.017631268s waiting for pod "etcd-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:08.320225   54248 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:08.327865   54248 pod_ready.go:92] pod "kube-apiserver-embed-certs-571296" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:08.327901   54248 pod_ready.go:81] duration metric: took 7.613771ms waiting for pod "kube-apiserver-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:08.327916   54248 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:10.343489   54248 pod_ready.go:102] pod "kube-controller-manager-embed-certs-571296" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:11.344309   54248 pod_ready.go:92] pod "kube-controller-manager-embed-certs-571296" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:11.344328   54248 pod_ready.go:81] duration metric: took 3.016404448s waiting for pod "kube-controller-manager-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:11.344338   54248 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kpt5d" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:11.353150   54248 pod_ready.go:92] pod "kube-proxy-kpt5d" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:11.353174   54248 pod_ready.go:81] duration metric: took 8.829647ms waiting for pod "kube-proxy-kpt5d" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:11.353183   54248 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:11.360223   54248 pod_ready.go:92] pod "kube-scheduler-embed-certs-571296" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:11.360242   54248 pod_ready.go:81] duration metric: took 7.0537ms waiting for pod "kube-scheduler-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:11.360251   54248 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace to be "Ready" ...
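The `pod_ready.go` lines above poll each control-plane pod until its Ready condition is True, with a 4-minute budget per pod. A rough client-go equivalent of that wait is sketched below; the helper name, poll interval, and nil-clientset guard are assumptions for illustration, not minikube's code:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls the pod until its Ready condition is True or the timeout expires.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // keep retrying on transient errors
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	var cs *kubernetes.Clientset // assumed to be built from a kubeconfig elsewhere
	if cs == nil {
		fmt.Println("clientset not configured; sketch only")
		return
	}
	if err := waitPodReady(cs, "kube-system", "etcd-embed-certs-571296", 4*time.Minute); err != nil {
		fmt.Println("pod never became Ready:", err)
	}
}
```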
	I0717 22:51:13.630627   53870 start.go:369] acquired machines lock for "old-k8s-version-332820" in 58.214644858s
	I0717 22:51:13.630698   53870 start.go:96] Skipping create...Using existing machine configuration
	I0717 22:51:13.630705   53870 fix.go:54] fixHost starting: 
	I0717 22:51:13.631117   53870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:51:13.631153   53870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:51:13.651676   53870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38349
	I0717 22:51:13.652152   53870 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:51:13.652820   53870 main.go:141] libmachine: Using API Version  1
	I0717 22:51:13.652841   53870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:51:13.653180   53870 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:51:13.653679   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:51:13.653832   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetState
	I0717 22:51:13.656911   53870 fix.go:102] recreateIfNeeded on old-k8s-version-332820: state=Stopped err=<nil>
	I0717 22:51:13.656944   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	W0717 22:51:13.657151   53870 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 22:51:13.659194   53870 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-332820" ...
	I0717 22:51:12.607198   54649 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-504828
	
	I0717 22:51:12.607256   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:51:12.610564   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.611073   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:12.611139   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.611470   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:51:12.611707   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:12.611918   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:12.612080   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:51:12.612267   54649 main.go:141] libmachine: Using SSH client type: native
	I0717 22:51:12.612863   54649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.118 22 <nil> <nil>}
	I0717 22:51:12.612897   54649 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-504828' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-504828/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-504828' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 22:51:12.749133   54649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
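Provisioning above first sets the guest hostname and then patches /etc/hosts over SSH, using exactly the shell shown in the log. A small sketch of how such command strings can be assembled before being handed to an SSH runner (only the command construction is shown; the runner itself is assumed):

```go
package main

import "fmt"

// hostnameCommands returns the two shell snippets from the log: one to set the
// hostname, one to make sure /etc/hosts maps 127.0.1.1 to that name.
func hostnameCommands(hostname string) (setHostname, fixHosts string) {
	setHostname = fmt.Sprintf(
		`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname`, hostname)
	fixHosts = fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
	return setHostname, fixHosts
}

func main() {
	set, fix := hostnameCommands("default-k8s-diff-port-504828")
	fmt.Println(set)
	fmt.Println(fix)
}
```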
	I0717 22:51:12.749159   54649 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16899-15759/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-15759/.minikube}
	I0717 22:51:12.749187   54649 buildroot.go:174] setting up certificates
	I0717 22:51:12.749198   54649 provision.go:83] configureAuth start
	I0717 22:51:12.749211   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetMachineName
	I0717 22:51:12.749475   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetIP
	I0717 22:51:12.752199   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.752608   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:12.752637   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.752753   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:51:12.754758   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.755095   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:12.755142   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.755255   54649 provision.go:138] copyHostCerts
	I0717 22:51:12.755313   54649 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem, removing ...
	I0717 22:51:12.755328   54649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem
	I0717 22:51:12.755393   54649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem (1078 bytes)
	I0717 22:51:12.755503   54649 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem, removing ...
	I0717 22:51:12.755516   54649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem
	I0717 22:51:12.755547   54649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem (1123 bytes)
	I0717 22:51:12.755615   54649 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem, removing ...
	I0717 22:51:12.755626   54649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem
	I0717 22:51:12.755649   54649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem (1675 bytes)
	I0717 22:51:12.755708   54649 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-504828 san=[192.168.72.118 192.168.72.118 localhost 127.0.0.1 minikube default-k8s-diff-port-504828]
	I0717 22:51:12.865920   54649 provision.go:172] copyRemoteCerts
	I0717 22:51:12.865978   54649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 22:51:12.865998   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:51:12.868784   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.869162   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:12.869196   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:12.869354   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:51:12.869551   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:12.869731   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:51:12.869864   54649 sshutil.go:53] new ssh client: &{IP:192.168.72.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa Username:docker}
	I0717 22:51:12.963734   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 22:51:12.988925   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0717 22:51:13.014007   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 22:51:13.037974   54649 provision.go:86] duration metric: configureAuth took 288.764872ms
	I0717 22:51:13.038002   54649 buildroot.go:189] setting minikube options for container-runtime
	I0717 22:51:13.038226   54649 config.go:182] Loaded profile config "default-k8s-diff-port-504828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:51:13.038298   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:51:13.041038   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.041510   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:13.041560   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.041722   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:51:13.041928   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:13.042115   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:13.042265   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:51:13.042462   54649 main.go:141] libmachine: Using SSH client type: native
	I0717 22:51:13.042862   54649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.118 22 <nil> <nil>}
	I0717 22:51:13.042883   54649 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 22:51:13.359789   54649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 22:51:13.359856   54649 machine.go:91] provisioned docker machine in 893.152202ms
	I0717 22:51:13.359873   54649 start.go:300] post-start starting for "default-k8s-diff-port-504828" (driver="kvm2")
	I0717 22:51:13.359885   54649 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 22:51:13.359909   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:51:13.360286   54649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 22:51:13.360322   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:51:13.363265   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.363637   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:13.363668   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.363953   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:51:13.364165   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:13.364336   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:51:13.364484   54649 sshutil.go:53] new ssh client: &{IP:192.168.72.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa Username:docker}
	I0717 22:51:13.456030   54649 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 22:51:13.460504   54649 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 22:51:13.460539   54649 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/addons for local assets ...
	I0717 22:51:13.460610   54649 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/files for local assets ...
	I0717 22:51:13.460711   54649 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> 229902.pem in /etc/ssl/certs
	I0717 22:51:13.460824   54649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 22:51:13.469442   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:51:13.497122   54649 start.go:303] post-start completed in 137.230872ms
	I0717 22:51:13.497150   54649 fix.go:56] fixHost completed within 19.269364226s
	I0717 22:51:13.497196   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:51:13.500248   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.500673   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:13.500721   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.500872   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:51:13.501093   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:13.501256   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:13.501434   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:51:13.501602   54649 main.go:141] libmachine: Using SSH client type: native
	I0717 22:51:13.502063   54649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.72.118 22 <nil> <nil>}
	I0717 22:51:13.502080   54649 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 22:51:13.630454   54649 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689634273.570672552
	
	I0717 22:51:13.630476   54649 fix.go:206] guest clock: 1689634273.570672552
	I0717 22:51:13.630486   54649 fix.go:219] Guest: 2023-07-17 22:51:13.570672552 +0000 UTC Remote: 2023-07-17 22:51:13.49715425 +0000 UTC m=+216.001835933 (delta=73.518302ms)
	I0717 22:51:13.630534   54649 fix.go:190] guest clock delta is within tolerance: 73.518302ms
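The `fix.go` lines above compare the guest's clock against the host and only proceed when the delta is inside a tolerance. A minimal sketch of that check; the tolerance value here is an assumption, since minikube's actual threshold is not shown in this log:

```go
package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports whether the guest clock is within tolerance of the host
// clock, mirroring the "guest clock delta is within tolerance" log line.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(73 * time.Millisecond) // roughly the delta seen in the log
	delta, ok := clockDeltaOK(guest, host, 1*time.Second) // assumed tolerance
	fmt.Printf("delta=%s within tolerance=%v\n", delta, ok)
}
```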
	I0717 22:51:13.630541   54649 start.go:83] releasing machines lock for "default-k8s-diff-port-504828", held for 19.402800296s
	I0717 22:51:13.630571   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:51:13.630804   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetIP
	I0717 22:51:13.633831   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.634285   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:13.634329   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.634496   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:51:13.635108   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:51:13.635324   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:51:13.635440   54649 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 22:51:13.635513   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:51:13.635563   54649 ssh_runner.go:195] Run: cat /version.json
	I0717 22:51:13.635590   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:51:13.638872   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.639085   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.639277   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:13.639313   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.639513   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:51:13.639711   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:13.639730   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:13.639769   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:13.639930   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:51:13.639966   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:51:13.640133   54649 sshutil.go:53] new ssh client: &{IP:192.168.72.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa Username:docker}
	I0717 22:51:13.640149   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:51:13.640293   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:51:13.640432   54649 sshutil.go:53] new ssh client: &{IP:192.168.72.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa Username:docker}
	I0717 22:51:13.732117   54649 ssh_runner.go:195] Run: systemctl --version
	I0717 22:51:13.762073   54649 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 22:51:13.920611   54649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 22:51:13.927492   54649 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 22:51:13.927552   54649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:51:13.943359   54649 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
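Before bringing up its own bridge CNI, the run above renames any existing bridge/podman configs in /etc/cni/net.d to `*.mk_disabled` via a find/mv one-liner. The same effect sketched locally in Go; the directory and name patterns come from the log, running it for real would need root, and it is purely illustrative:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableCNIConfigs renames bridge/podman CNI config files so the runtime
// ignores them, like the "find ... -exec mv {} {}.mk_disabled" step above.
func disableCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableCNIConfigs("/etc/cni/net.d")
	fmt.Println(disabled, err)
}
```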
	I0717 22:51:13.943384   54649 start.go:466] detecting cgroup driver to use...
	I0717 22:51:13.943456   54649 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 22:51:13.959123   54649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 22:51:13.974812   54649 docker.go:196] disabling cri-docker service (if available) ...
	I0717 22:51:13.974875   54649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 22:51:13.991292   54649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 22:51:14.006999   54649 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 22:51:14.116763   54649 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 22:51:14.286675   54649 docker.go:212] disabling docker service ...
	I0717 22:51:14.286747   54649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 22:51:14.304879   54649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 22:51:14.319280   54649 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 22:51:14.436994   54649 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 22:51:14.551392   54649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 22:51:14.564944   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 22:51:14.588553   54649 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 22:51:14.588618   54649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:51:14.602482   54649 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 22:51:14.602561   54649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:51:14.613901   54649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:51:14.624520   54649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:51:14.634941   54649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 22:51:14.649124   54649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 22:51:14.659103   54649 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 22:51:14.659194   54649 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 22:51:14.673064   54649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 22:51:14.684547   54649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 22:51:14.796698   54649 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 22:51:15.013266   54649 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 22:51:15.013352   54649 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 22:51:15.019638   54649 start.go:534] Will wait 60s for crictl version
	I0717 22:51:15.019707   54649 ssh_runner.go:195] Run: which crictl
	I0717 22:51:15.023691   54649 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 22:51:15.079550   54649 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
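After restarting CRI-O, the run waits up to 60s for /var/run/crio/crio.sock to appear and then for crictl to answer. A minimal sketch of that socket wait; the socket path and 60s budget come from the log, while the poll interval is an assumption:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the unix socket exists or the timeout expires,
// like "Will wait 60s for socket path /var/run/crio/crio.sock" above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
}

func main() {
	err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second)
	fmt.Println("socket wait result:", err)
}
```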
	I0717 22:51:15.079642   54649 ssh_runner.go:195] Run: crio --version
	I0717 22:51:15.149137   54649 ssh_runner.go:195] Run: crio --version
	I0717 22:51:15.210171   54649 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 22:51:15.211641   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetIP
	I0717 22:51:15.214746   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:15.215160   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:51:15.215195   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:51:15.215444   54649 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 22:51:15.220209   54649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:51:15.233265   54649 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 22:51:15.233336   54649 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:51:15.278849   54649 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 22:51:15.278928   54649 ssh_runner.go:195] Run: which lz4
	I0717 22:51:15.284618   54649 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 22:51:15.289979   54649 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 22:51:15.290021   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0717 22:51:17.240790   54649 crio.go:444] Took 1.956220 seconds to copy over tarball
	I0717 22:51:17.240850   54649 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 22:51:14.577167   54573 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.27.3: (1.996354374s)
	I0717 22:51:14.577200   54573 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.27.3 from cache
	I0717 22:51:14.577239   54573 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 22:51:14.577288   54573 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0717 22:51:15.749388   54573 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.172071962s)
	I0717 22:51:15.749419   54573 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 22:51:15.749442   54573 cache_images.go:123] Successfully loaded all cached images
	I0717 22:51:15.749448   54573 cache_images.go:92] LoadImages completed in 19.962118423s
	I0717 22:51:15.749548   54573 ssh_runner.go:195] Run: crio config
	I0717 22:51:15.830341   54573 cni.go:84] Creating CNI manager for ""
	I0717 22:51:15.830380   54573 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:51:15.830394   54573 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 22:51:15.830416   54573 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.6 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-935524 NodeName:no-preload-935524 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 22:51:15.830609   54573 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-935524"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 22:51:15.830710   54573 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-935524 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:no-preload-935524 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
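The kubelet systemd drop-in above is generated from the cluster config (Kubernetes version, node name, node IP, CRI socket). A hedged text/template sketch that renders an equivalent drop-in; the template layout mirrors the log output but is not claimed to be minikube's actual template:

```go
package main

import (
	"os"
	"text/template"
)

const kubeletDropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	// Values taken from the log lines above for the no-preload profile.
	params := struct {
		KubernetesVersion, CRISocket, NodeName, NodeIP string
	}{
		KubernetesVersion: "v1.27.3",
		CRISocket:         "unix:///var/run/crio/crio.sock",
		NodeName:          "no-preload-935524",
		NodeIP:            "192.168.39.6",
	}
	tmpl := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	_ = tmpl.Execute(os.Stdout, params) // writes the rendered drop-in to stdout
}
```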
	I0717 22:51:15.830777   54573 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 22:51:15.844785   54573 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 22:51:15.844854   54573 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 22:51:15.859135   54573 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0717 22:51:15.884350   54573 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 22:51:15.904410   54573 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0717 22:51:15.930959   54573 ssh_runner.go:195] Run: grep 192.168.39.6	control-plane.minikube.internal$ /etc/hosts
	I0717 22:51:15.937680   54573 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:51:15.960124   54573 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524 for IP: 192.168.39.6
	I0717 22:51:15.960169   54573 certs.go:190] acquiring lock for shared ca certs: {Name:mk358cdd8ffcf2f8ada4337c2bb687d932ef5afe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:51:15.960352   54573 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key
	I0717 22:51:15.960416   54573 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key
	I0717 22:51:15.960539   54573 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/client.key
	I0717 22:51:15.960635   54573 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/apiserver.key.cc3bd7a5
	I0717 22:51:15.960694   54573 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/proxy-client.key
	I0717 22:51:15.960842   54573 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem (1338 bytes)
	W0717 22:51:15.960882   54573 certs.go:433] ignoring /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990_empty.pem, impossibly tiny 0 bytes
	I0717 22:51:15.960899   54573 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 22:51:15.960936   54573 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem (1078 bytes)
	I0717 22:51:15.960973   54573 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem (1123 bytes)
	I0717 22:51:15.961001   54573 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem (1675 bytes)
	I0717 22:51:15.961063   54573 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:51:15.961864   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 22:51:16.000246   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 22:51:16.036739   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 22:51:16.073916   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 22:51:16.110871   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 22:51:16.147671   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 22:51:16.183503   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 22:51:16.216441   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 22:51:16.251053   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 22:51:16.291022   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem --> /usr/share/ca-certificates/22990.pem (1338 bytes)
	I0717 22:51:16.327764   54573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /usr/share/ca-certificates/229902.pem (1708 bytes)
	I0717 22:51:16.360870   54573 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 22:51:16.399760   54573 ssh_runner.go:195] Run: openssl version
	I0717 22:51:16.407720   54573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 22:51:16.423038   54573 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:51:16.430870   54573 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:51:16.430933   54573 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:51:16.441206   54573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 22:51:16.455708   54573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22990.pem && ln -fs /usr/share/ca-certificates/22990.pem /etc/ssl/certs/22990.pem"
	I0717 22:51:16.470036   54573 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22990.pem
	I0717 22:51:16.477133   54573 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 21:49 /usr/share/ca-certificates/22990.pem
	I0717 22:51:16.477206   54573 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22990.pem
	I0717 22:51:16.485309   54573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22990.pem /etc/ssl/certs/51391683.0"
	I0717 22:51:16.503973   54573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/229902.pem && ln -fs /usr/share/ca-certificates/229902.pem /etc/ssl/certs/229902.pem"
	I0717 22:51:16.524430   54573 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/229902.pem
	I0717 22:51:16.533991   54573 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 21:49 /usr/share/ca-certificates/229902.pem
	I0717 22:51:16.534052   54573 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/229902.pem
	I0717 22:51:16.544688   54573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/229902.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 22:51:16.563847   54573 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 22:51:16.572122   54573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 22:51:16.583217   54573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 22:51:16.594130   54573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 22:51:16.606268   54573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 22:51:16.618166   54573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 22:51:16.628424   54573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
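Each `openssl x509 -checkend 86400` run above asks whether a certificate will still be valid 24 hours from now. The same check sketched with Go's crypto/x509 for a local PEM file; the path and the 24h window come from the log, and error handling is kept minimal:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the first certificate in the PEM file is still valid
// at now+window, mirroring `openssl x509 -checkend 86400`.
func validFor(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	fmt.Println("valid for another 24h:", ok, err)
}
```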
	I0717 22:51:16.636407   54573 kubeadm.go:404] StartCluster: {Name:no-preload-935524 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-935524 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/mini
kube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:51:16.636531   54573 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 22:51:16.636616   54573 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:51:16.677023   54573 cri.go:89] found id: ""
	I0717 22:51:16.677096   54573 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 22:51:16.691214   54573 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 22:51:16.691243   54573 kubeadm.go:636] restartCluster start
	I0717 22:51:16.691309   54573 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 22:51:16.705358   54573 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:16.707061   54573 kubeconfig.go:92] found "no-preload-935524" server: "https://192.168.39.6:8443"
	I0717 22:51:16.710828   54573 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 22:51:16.722187   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:16.722262   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:16.739474   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:17.240340   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:17.240432   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:17.255528   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:13.660641   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .Start
	I0717 22:51:13.660899   53870 main.go:141] libmachine: (old-k8s-version-332820) Ensuring networks are active...
	I0717 22:51:13.661724   53870 main.go:141] libmachine: (old-k8s-version-332820) Ensuring network default is active
	I0717 22:51:13.662114   53870 main.go:141] libmachine: (old-k8s-version-332820) Ensuring network mk-old-k8s-version-332820 is active
	I0717 22:51:13.662588   53870 main.go:141] libmachine: (old-k8s-version-332820) Getting domain xml...
	I0717 22:51:13.663907   53870 main.go:141] libmachine: (old-k8s-version-332820) Creating domain...
	I0717 22:51:14.067159   53870 main.go:141] libmachine: (old-k8s-version-332820) Waiting to get IP...
	I0717 22:51:14.067897   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:14.068328   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:14.068398   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:14.068321   55454 retry.go:31] will retry after 239.1687ms: waiting for machine to come up
	I0717 22:51:14.309022   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:14.309748   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:14.309782   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:14.309696   55454 retry.go:31] will retry after 256.356399ms: waiting for machine to come up
	I0717 22:51:14.568103   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:14.568537   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:14.568572   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:14.568490   55454 retry.go:31] will retry after 386.257739ms: waiting for machine to come up
	I0717 22:51:14.955922   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:14.956518   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:14.956548   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:14.956458   55454 retry.go:31] will retry after 410.490408ms: waiting for machine to come up
	I0717 22:51:15.368904   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:15.369672   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:15.369780   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:15.369722   55454 retry.go:31] will retry after 536.865068ms: waiting for machine to come up
	I0717 22:51:15.908301   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:15.908814   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:15.908851   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:15.908774   55454 retry.go:31] will retry after 863.22272ms: waiting for machine to come up
	I0717 22:51:16.773413   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:16.773936   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:16.773971   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:16.773877   55454 retry.go:31] will retry after 858.793193ms: waiting for machine to come up
	I0717 22:51:17.634087   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:17.634588   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:17.634613   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:17.634532   55454 retry.go:31] will retry after 1.416659037s: waiting for machine to come up
	I0717 22:51:13.375358   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:15.393985   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:17.887365   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:20.250749   54649 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.009864781s)
	I0717 22:51:20.250783   54649 crio.go:451] Took 3.009971 seconds to extract the tarball
	I0717 22:51:20.250793   54649 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 22:51:20.291666   54649 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:51:20.341098   54649 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 22:51:20.341126   54649 cache_images.go:84] Images are preloaded, skipping loading
	I0717 22:51:20.341196   54649 ssh_runner.go:195] Run: crio config
	I0717 22:51:20.415138   54649 cni.go:84] Creating CNI manager for ""
	I0717 22:51:20.415161   54649 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:51:20.415171   54649 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 22:51:20.415185   54649 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.118 APIServerPort:8444 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-504828 NodeName:default-k8s-diff-port-504828 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.118"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.118 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 22:51:20.415352   54649 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.118
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-504828"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.118
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.118"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 22:51:20.415432   54649 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-504828 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.118
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-504828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0717 22:51:20.415488   54649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 22:51:20.427702   54649 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 22:51:20.427758   54649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 22:51:20.436950   54649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0717 22:51:20.454346   54649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 22:51:20.470679   54649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0717 22:51:20.491725   54649 ssh_runner.go:195] Run: grep 192.168.72.118	control-plane.minikube.internal$ /etc/hosts
	I0717 22:51:20.495952   54649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.118	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:51:20.511714   54649 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828 for IP: 192.168.72.118
	I0717 22:51:20.511768   54649 certs.go:190] acquiring lock for shared ca certs: {Name:mk358cdd8ffcf2f8ada4337c2bb687d932ef5afe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:51:20.511949   54649 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key
	I0717 22:51:20.511997   54649 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key
	I0717 22:51:20.512100   54649 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/client.key
	I0717 22:51:20.512210   54649 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/apiserver.key.f316a5ec
	I0717 22:51:20.512293   54649 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/proxy-client.key
	I0717 22:51:20.512432   54649 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem (1338 bytes)
	W0717 22:51:20.512474   54649 certs.go:433] ignoring /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990_empty.pem, impossibly tiny 0 bytes
	I0717 22:51:20.512490   54649 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 22:51:20.512526   54649 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem (1078 bytes)
	I0717 22:51:20.512563   54649 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem (1123 bytes)
	I0717 22:51:20.512597   54649 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem (1675 bytes)
	I0717 22:51:20.512654   54649 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:51:20.513217   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 22:51:20.543975   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 22:51:20.573149   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 22:51:20.603536   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 22:51:20.632387   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 22:51:20.658524   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 22:51:20.685636   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 22:51:20.715849   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 22:51:20.746544   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /usr/share/ca-certificates/229902.pem (1708 bytes)
	I0717 22:51:20.773588   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 22:51:20.798921   54649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem --> /usr/share/ca-certificates/22990.pem (1338 bytes)
	I0717 22:51:20.826004   54649 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 22:51:20.843941   54649 ssh_runner.go:195] Run: openssl version
	I0717 22:51:20.849904   54649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/229902.pem && ln -fs /usr/share/ca-certificates/229902.pem /etc/ssl/certs/229902.pem"
	I0717 22:51:20.860510   54649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/229902.pem
	I0717 22:51:20.865435   54649 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 21:49 /usr/share/ca-certificates/229902.pem
	I0717 22:51:20.865499   54649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/229902.pem
	I0717 22:51:20.872493   54649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/229902.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 22:51:20.883044   54649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 22:51:20.893448   54649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:51:20.898872   54649 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:51:20.898937   54649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:51:20.905231   54649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 22:51:20.915267   54649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22990.pem && ln -fs /usr/share/ca-certificates/22990.pem /etc/ssl/certs/22990.pem"
	I0717 22:51:20.925267   54649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22990.pem
	I0717 22:51:20.929988   54649 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 21:49 /usr/share/ca-certificates/22990.pem
	I0717 22:51:20.930055   54649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22990.pem
	I0717 22:51:20.935935   54649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22990.pem /etc/ssl/certs/51391683.0"
	I0717 22:51:20.945567   54649 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 22:51:20.950083   54649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 22:51:20.956164   54649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 22:51:20.962921   54649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 22:51:20.969329   54649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 22:51:20.975672   54649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 22:51:20.981532   54649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 22:51:20.987431   54649 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-504828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port
-504828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.118 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountSt
ring:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:51:20.987551   54649 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 22:51:20.987640   54649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:51:21.020184   54649 cri.go:89] found id: ""
	I0717 22:51:21.020272   54649 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 22:51:21.030407   54649 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 22:51:21.030426   54649 kubeadm.go:636] restartCluster start
	I0717 22:51:21.030484   54649 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 22:51:21.039171   54649 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:21.040133   54649 kubeconfig.go:92] found "default-k8s-diff-port-504828" server: "https://192.168.72.118:8444"
	I0717 22:51:21.043010   54649 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 22:51:21.052032   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:21.052083   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:21.063718   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:21.564403   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:21.564474   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:21.576250   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:22.063846   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:22.063915   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:22.077908   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:17.739595   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:17.739675   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:17.754882   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:18.240006   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:18.240109   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:18.253391   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:18.739658   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:18.739750   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:18.751666   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:19.240285   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:19.240385   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:19.254816   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:19.740338   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:19.740430   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:19.757899   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:20.240481   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:20.240561   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:20.255605   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:20.739950   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:20.740064   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:20.754552   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:21.240009   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:21.240088   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:21.252127   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:21.739671   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:21.739761   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:21.751590   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:22.239795   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:22.239895   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:22.255489   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:19.053039   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:19.053552   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:19.053577   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:19.053545   55454 retry.go:31] will retry after 1.844468395s: waiting for machine to come up
	I0717 22:51:20.899373   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:20.899955   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:20.899985   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:20.899907   55454 retry.go:31] will retry after 1.689590414s: waiting for machine to come up
	I0717 22:51:22.590651   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:22.591178   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:22.591210   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:22.591133   55454 retry.go:31] will retry after 2.006187847s: waiting for machine to come up
	I0717 22:51:20.375100   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:22.375448   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:22.564646   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:22.564758   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:22.578416   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:23.063819   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:23.063917   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:23.076239   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:23.563771   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:23.563906   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:23.577184   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:24.064855   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:24.064943   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:24.080926   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:24.563906   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:24.564002   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:24.580421   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:25.063993   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:25.064078   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:25.076570   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:25.563894   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:25.563978   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:25.575475   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:26.063959   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:26.064042   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:26.075498   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:26.564007   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:26.564068   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:26.576760   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:27.064334   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:27.064437   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:27.076567   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:22.739773   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:22.739859   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:22.752462   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:23.240402   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:23.240481   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:23.255896   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:23.740550   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:23.740740   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:23.756364   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:24.239721   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:24.239803   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:24.251755   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:24.740355   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:24.740455   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:24.751880   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:25.240545   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:25.240637   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:25.252165   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:25.739649   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:25.739729   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:25.751302   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:26.239861   54573 api_server.go:166] Checking apiserver status ...
	I0717 22:51:26.239951   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:26.251854   54573 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:26.722721   54573 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 22:51:26.722761   54573 kubeadm.go:1128] stopping kube-system containers ...
	I0717 22:51:26.722774   54573 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 22:51:26.722824   54573 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:51:26.754496   54573 cri.go:89] found id: ""
	I0717 22:51:26.754575   54573 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 22:51:26.769858   54573 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 22:51:26.778403   54573 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:51:26.778456   54573 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 22:51:26.788782   54573 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 22:51:26.788809   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:26.926114   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:24.598549   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:24.599047   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:24.599078   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:24.598993   55454 retry.go:31] will retry after 2.77055632s: waiting for machine to come up
	I0717 22:51:27.371775   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:27.372248   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | unable to find current IP address of domain old-k8s-version-332820 in network mk-old-k8s-version-332820
	I0717 22:51:27.372282   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | I0717 22:51:27.372196   55454 retry.go:31] will retry after 3.942088727s: waiting for machine to come up
	I0717 22:51:24.876056   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:26.876873   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:27.564363   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:27.564459   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:27.578222   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:28.063778   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:28.063883   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:28.075427   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:28.564630   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:28.564717   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:28.576903   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:29.064502   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:29.064605   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:29.075995   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:29.564295   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:29.564378   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:29.576762   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:30.063786   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:30.063870   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:30.079670   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:30.564137   54649 api_server.go:166] Checking apiserver status ...
	I0717 22:51:30.564246   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:30.579055   54649 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:31.052972   54649 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 22:51:31.053010   54649 kubeadm.go:1128] stopping kube-system containers ...
	I0717 22:51:31.053022   54649 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 22:51:31.053071   54649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:51:31.087580   54649 cri.go:89] found id: ""
	I0717 22:51:31.087681   54649 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 22:51:31.103788   54649 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 22:51:31.113570   54649 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:51:31.113630   54649 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 22:51:31.122993   54649 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 22:51:31.123016   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:31.254859   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:32.122277   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:32.360183   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:32.499924   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:28.181412   54573 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.255240525s)
	I0717 22:51:28.181446   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:28.398026   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:28.491028   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:28.586346   54573 api_server.go:52] waiting for apiserver process to appear ...
	I0717 22:51:28.586450   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:29.099979   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:29.599755   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:30.100095   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:30.600338   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:31.100205   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:31.129978   54573 api_server.go:72] duration metric: took 2.543631809s to wait for apiserver process to appear ...
	I0717 22:51:31.130004   54573 api_server.go:88] waiting for apiserver healthz status ...
	I0717 22:51:31.130020   54573 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0717 22:51:31.316328   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.316892   53870 main.go:141] libmachine: (old-k8s-version-332820) Found IP for machine: 192.168.50.149
	I0717 22:51:31.316924   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has current primary IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.316936   53870 main.go:141] libmachine: (old-k8s-version-332820) Reserving static IP address...
	I0717 22:51:31.317425   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "old-k8s-version-332820", mac: "52:54:00:46:ca:1a", ip: "192.168.50.149"} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:31.317463   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | skip adding static IP to network mk-old-k8s-version-332820 - found existing host DHCP lease matching {name: "old-k8s-version-332820", mac: "52:54:00:46:ca:1a", ip: "192.168.50.149"}
	I0717 22:51:31.317486   53870 main.go:141] libmachine: (old-k8s-version-332820) Reserved static IP address: 192.168.50.149
	I0717 22:51:31.317503   53870 main.go:141] libmachine: (old-k8s-version-332820) Waiting for SSH to be available...
	I0717 22:51:31.317531   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | Getting to WaitForSSH function...
	I0717 22:51:31.320209   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.320558   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:31.320593   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.320779   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | Using SSH client type: external
	I0717 22:51:31.320810   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | Using SSH private key: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/old-k8s-version-332820/id_rsa (-rw-------)
	I0717 22:51:31.320862   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.149 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16899-15759/.minikube/machines/old-k8s-version-332820/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 22:51:31.320881   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | About to run SSH command:
	I0717 22:51:31.320895   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | exit 0
	I0717 22:51:31.426263   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | SSH cmd err, output: <nil>: 
	I0717 22:51:31.426659   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetConfigRaw
	I0717 22:51:31.427329   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetIP
	I0717 22:51:31.430330   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.430697   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:31.430739   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.431053   53870 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/config.json ...
	I0717 22:51:31.431288   53870 machine.go:88] provisioning docker machine ...
	I0717 22:51:31.431312   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:51:31.431531   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetMachineName
	I0717 22:51:31.431711   53870 buildroot.go:166] provisioning hostname "old-k8s-version-332820"
	I0717 22:51:31.431736   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetMachineName
	I0717 22:51:31.431959   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:51:31.434616   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.435073   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:31.435105   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.435246   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:51:31.435429   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:31.435578   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:31.435720   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:51:31.435889   53870 main.go:141] libmachine: Using SSH client type: native
	I0717 22:51:31.436476   53870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.149 22 <nil> <nil>}
	I0717 22:51:31.436499   53870 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-332820 && echo "old-k8s-version-332820" | sudo tee /etc/hostname
	I0717 22:51:31.589302   53870 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-332820
	
	I0717 22:51:31.589343   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:51:31.592724   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.593180   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:31.593236   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.593559   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:51:31.593754   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:31.593922   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:31.594077   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:51:31.594266   53870 main.go:141] libmachine: Using SSH client type: native
	I0717 22:51:31.594671   53870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.149 22 <nil> <nil>}
	I0717 22:51:31.594696   53870 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-332820' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-332820/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-332820' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 22:51:31.746218   53870 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:51:31.746250   53870 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16899-15759/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-15759/.minikube}
	I0717 22:51:31.746274   53870 buildroot.go:174] setting up certificates
	I0717 22:51:31.746298   53870 provision.go:83] configureAuth start
	I0717 22:51:31.746316   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetMachineName
	I0717 22:51:31.746626   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetIP
	I0717 22:51:31.750130   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.750678   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:31.750724   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.750781   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:51:31.753170   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.753495   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:31.753552   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.753654   53870 provision.go:138] copyHostCerts
	I0717 22:51:31.753715   53870 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem, removing ...
	I0717 22:51:31.753728   53870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem
	I0717 22:51:31.753804   53870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem (1078 bytes)
	I0717 22:51:31.753944   53870 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem, removing ...
	I0717 22:51:31.753957   53870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem
	I0717 22:51:31.753989   53870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem (1123 bytes)
	I0717 22:51:31.754072   53870 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem, removing ...
	I0717 22:51:31.754085   53870 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem
	I0717 22:51:31.754113   53870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem (1675 bytes)
	I0717 22:51:31.754184   53870 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-332820 san=[192.168.50.149 192.168.50.149 localhost 127.0.0.1 minikube old-k8s-version-332820]
	I0717 22:51:31.847147   53870 provision.go:172] copyRemoteCerts
	I0717 22:51:31.847203   53870 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 22:51:31.847225   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:51:31.850322   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.850753   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:31.850810   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:31.851095   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:51:31.851414   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:31.851605   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:51:31.851784   53870 sshutil.go:53] new ssh client: &{IP:192.168.50.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/old-k8s-version-332820/id_rsa Username:docker}
	I0717 22:51:31.951319   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 22:51:31.980515   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 22:51:32.010536   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 22:51:32.037399   53870 provision.go:86] duration metric: configureAuth took 291.082125ms
	I0717 22:51:32.037434   53870 buildroot.go:189] setting minikube options for container-runtime
	I0717 22:51:32.037660   53870 config.go:182] Loaded profile config "old-k8s-version-332820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0717 22:51:32.037735   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:51:32.040863   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.041427   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:32.041534   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.041625   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:51:32.041848   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:32.042053   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:32.042225   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:51:32.042394   53870 main.go:141] libmachine: Using SSH client type: native
	I0717 22:51:32.042812   53870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.149 22 <nil> <nil>}
	I0717 22:51:32.042834   53870 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 22:51:32.425577   53870 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 22:51:32.425603   53870 machine.go:91] provisioned docker machine in 994.299178ms
	I0717 22:51:32.425615   53870 start.go:300] post-start starting for "old-k8s-version-332820" (driver="kvm2")
	I0717 22:51:32.425627   53870 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 22:51:32.425662   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:51:32.426023   53870 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 22:51:32.426060   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:51:32.429590   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.430060   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:32.430087   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.430464   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:51:32.430677   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:32.430839   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:51:32.430955   53870 sshutil.go:53] new ssh client: &{IP:192.168.50.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/old-k8s-version-332820/id_rsa Username:docker}
	I0717 22:51:32.535625   53870 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 22:51:32.541510   53870 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 22:51:32.541569   53870 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/addons for local assets ...
	I0717 22:51:32.541660   53870 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/files for local assets ...
	I0717 22:51:32.541771   53870 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> 229902.pem in /etc/ssl/certs
	I0717 22:51:32.541919   53870 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 22:51:32.554113   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:51:32.579574   53870 start.go:303] post-start completed in 153.943669ms
	I0717 22:51:32.579597   53870 fix.go:56] fixHost completed within 18.948892402s
	I0717 22:51:32.579620   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:51:32.582411   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.582774   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:32.582807   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.582939   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:51:32.583181   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:32.583404   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:32.583562   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:51:32.583804   53870 main.go:141] libmachine: Using SSH client type: native
	I0717 22:51:32.584270   53870 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.149 22 <nil> <nil>}
	I0717 22:51:32.584287   53870 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 22:51:32.727134   53870 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689634292.668672695
	
	I0717 22:51:32.727160   53870 fix.go:206] guest clock: 1689634292.668672695
	I0717 22:51:32.727171   53870 fix.go:219] Guest: 2023-07-17 22:51:32.668672695 +0000 UTC Remote: 2023-07-17 22:51:32.579600815 +0000 UTC m=+359.756107714 (delta=89.07188ms)
	I0717 22:51:32.727195   53870 fix.go:190] guest clock delta is within tolerance: 89.07188ms
	I0717 22:51:32.727201   53870 start.go:83] releasing machines lock for "old-k8s-version-332820", held for 19.096529597s
	I0717 22:51:32.727223   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:51:32.727539   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetIP
	I0717 22:51:32.730521   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.730926   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:32.730958   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.731115   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:51:32.731706   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:51:32.731881   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:51:32.731968   53870 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 22:51:32.732018   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:51:32.732115   53870 ssh_runner.go:195] Run: cat /version.json
	I0717 22:51:32.732141   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:51:32.734864   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.735214   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:32.735264   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.735284   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.735387   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:51:32.735561   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:32.735821   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:32.735832   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:51:32.735852   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:32.735958   53870 sshutil.go:53] new ssh client: &{IP:192.168.50.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/old-k8s-version-332820/id_rsa Username:docker}
	I0717 22:51:32.736097   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:51:32.736224   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:51:32.736329   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:51:32.736435   53870 sshutil.go:53] new ssh client: &{IP:192.168.50.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/old-k8s-version-332820/id_rsa Username:docker}
	I0717 22:51:32.854136   53870 ssh_runner.go:195] Run: systemctl --version
	I0717 22:51:29.375082   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:31.376747   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:32.860997   53870 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 22:51:33.025325   53870 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 22:51:33.031587   53870 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 22:51:33.031662   53870 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:51:33.046431   53870 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 22:51:33.046454   53870 start.go:466] detecting cgroup driver to use...
	I0717 22:51:33.046520   53870 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 22:51:33.067265   53870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 22:51:33.079490   53870 docker.go:196] disabling cri-docker service (if available) ...
	I0717 22:51:33.079543   53870 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 22:51:33.093639   53870 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 22:51:33.106664   53870 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 22:51:33.248823   53870 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 22:51:33.414350   53870 docker.go:212] disabling docker service ...
	I0717 22:51:33.414420   53870 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 22:51:33.428674   53870 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 22:51:33.442140   53870 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 22:51:33.564890   53870 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 22:51:33.699890   53870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 22:51:33.714011   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 22:51:33.733726   53870 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0717 22:51:33.733825   53870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:51:33.746603   53870 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 22:51:33.746676   53870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:51:33.759291   53870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:51:33.772841   53870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:51:33.785507   53870 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 22:51:33.798349   53870 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 22:51:33.807468   53870 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 22:51:33.807578   53870 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 22:51:33.822587   53870 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 22:51:33.832542   53870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 22:51:33.975008   53870 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 22:51:34.192967   53870 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 22:51:34.193041   53870 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 22:51:34.200128   53870 start.go:534] Will wait 60s for crictl version
	I0717 22:51:34.200194   53870 ssh_runner.go:195] Run: which crictl
	I0717 22:51:34.204913   53870 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 22:51:34.243900   53870 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0717 22:51:34.244054   53870 ssh_runner.go:195] Run: crio --version
	I0717 22:51:34.300151   53870 ssh_runner.go:195] Run: crio --version
	I0717 22:51:34.365344   53870 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0717 22:51:35.258235   54573 api_server.go:279] https://192.168.39.6:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 22:51:35.258266   54573 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 22:51:35.758740   54573 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0717 22:51:35.767634   54573 api_server.go:279] https://192.168.39.6:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 22:51:35.767669   54573 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 22:51:36.259368   54573 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0717 22:51:36.269761   54573 api_server.go:279] https://192.168.39.6:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 22:51:36.269804   54573 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 22:51:36.759179   54573 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0717 22:51:36.767717   54573 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I0717 22:51:36.783171   54573 api_server.go:141] control plane version: v1.27.3
	I0717 22:51:36.783277   54573 api_server.go:131] duration metric: took 5.653264463s to wait for apiserver health ...
	I0717 22:51:36.783299   54573 cni.go:84] Creating CNI manager for ""
	I0717 22:51:36.783320   54573 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:51:36.785787   54573 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 22:51:32.594699   54649 api_server.go:52] waiting for apiserver process to appear ...
	I0717 22:51:32.594791   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:33.112226   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:33.611860   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:34.112071   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:34.611354   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:35.111291   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:35.611869   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:35.637583   54649 api_server.go:72] duration metric: took 3.042882856s to wait for apiserver process to appear ...
	I0717 22:51:35.637607   54649 api_server.go:88] waiting for apiserver healthz status ...
	I0717 22:51:35.637624   54649 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8444/healthz ...
	I0717 22:51:36.787709   54573 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 22:51:36.808980   54573 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 22:51:36.862525   54573 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 22:51:36.878653   54573 system_pods.go:59] 8 kube-system pods found
	I0717 22:51:36.878761   54573 system_pods.go:61] "coredns-5d78c9869d-2mpst" [7516b57f-a4cb-4e2f-995e-8e063bed22ae] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 22:51:36.878788   54573 system_pods.go:61] "etcd-no-preload-935524" [b663c4f9-d98e-457d-b511-435bea5e9525] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 22:51:36.878827   54573 system_pods.go:61] "kube-apiserver-no-preload-935524" [fb6f55fc-8705-46aa-a23c-e7870e52e542] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 22:51:36.878852   54573 system_pods.go:61] "kube-controller-manager-no-preload-935524" [37d43d22-e857-4a9b-b0cb-a9fc39931baa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 22:51:36.878874   54573 system_pods.go:61] "kube-proxy-qhp66" [8bc95955-b7ba-41e3-ac67-604a9695f784] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 22:51:36.878913   54573 system_pods.go:61] "kube-scheduler-no-preload-935524" [86fef4e0-b156-421a-8d53-0d34cae2cdb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 22:51:36.878940   54573 system_pods.go:61] "metrics-server-74d5c6b9c-tlbpl" [7c478efe-4435-45dd-a688-745872fc2918] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:51:36.878959   54573 system_pods.go:61] "storage-provisioner" [85812d54-7a57-430b-991e-e301f123a86a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 22:51:36.878991   54573 system_pods.go:74] duration metric: took 16.439496ms to wait for pod list to return data ...
	I0717 22:51:36.879014   54573 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:51:36.886556   54573 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:51:36.886669   54573 node_conditions.go:123] node cpu capacity is 2
	I0717 22:51:36.886694   54573 node_conditions.go:105] duration metric: took 7.665172ms to run NodePressure ...
	I0717 22:51:36.886743   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:37.408758   54573 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 22:51:37.426705   54573 kubeadm.go:787] kubelet initialised
	I0717 22:51:37.426750   54573 kubeadm.go:788] duration metric: took 17.898411ms waiting for restarted kubelet to initialise ...
	I0717 22:51:37.426760   54573 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:51:37.442893   54573 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-2mpst" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:37.449989   54573 pod_ready.go:97] node "no-preload-935524" hosting pod "coredns-5d78c9869d-2mpst" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.450020   54573 pod_ready.go:81] duration metric: took 7.096248ms waiting for pod "coredns-5d78c9869d-2mpst" in "kube-system" namespace to be "Ready" ...
	E0717 22:51:37.450032   54573 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-935524" hosting pod "coredns-5d78c9869d-2mpst" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.450043   54573 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:37.460343   54573 pod_ready.go:97] node "no-preload-935524" hosting pod "etcd-no-preload-935524" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.460423   54573 pod_ready.go:81] duration metric: took 10.370601ms waiting for pod "etcd-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	E0717 22:51:37.460468   54573 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-935524" hosting pod "etcd-no-preload-935524" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.460481   54573 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:37.475124   54573 pod_ready.go:97] node "no-preload-935524" hosting pod "kube-apiserver-no-preload-935524" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.475203   54573 pod_ready.go:81] duration metric: took 14.713192ms waiting for pod "kube-apiserver-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	E0717 22:51:37.475224   54573 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-935524" hosting pod "kube-apiserver-no-preload-935524" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.475242   54573 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:37.486443   54573 pod_ready.go:97] node "no-preload-935524" hosting pod "kube-controller-manager-no-preload-935524" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.486529   54573 pod_ready.go:81] duration metric: took 11.253247ms waiting for pod "kube-controller-manager-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	E0717 22:51:37.486551   54573 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-935524" hosting pod "kube-controller-manager-no-preload-935524" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.486570   54573 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qhp66" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:34.367014   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetIP
	I0717 22:51:34.370717   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:34.371243   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:51:34.371272   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:51:34.371626   53870 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0717 22:51:34.380223   53870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:51:34.395496   53870 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0717 22:51:34.395564   53870 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:51:34.440412   53870 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0717 22:51:34.440486   53870 ssh_runner.go:195] Run: which lz4
	I0717 22:51:34.445702   53870 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 22:51:34.451213   53870 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 22:51:34.451259   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0717 22:51:36.330808   53870 crio.go:444] Took 1.885143 seconds to copy over tarball
	I0717 22:51:36.330866   53870 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 22:51:33.377108   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:35.379770   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:37.382141   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:37.819308   54573 pod_ready.go:97] node "no-preload-935524" hosting pod "kube-proxy-qhp66" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.819393   54573 pod_ready.go:81] duration metric: took 332.789076ms waiting for pod "kube-proxy-qhp66" in "kube-system" namespace to be "Ready" ...
	E0717 22:51:37.819414   54573 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-935524" hosting pod "kube-proxy-qhp66" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:37.819430   54573 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:38.213914   54573 pod_ready.go:97] node "no-preload-935524" hosting pod "kube-scheduler-no-preload-935524" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:38.213947   54573 pod_ready.go:81] duration metric: took 394.500573ms waiting for pod "kube-scheduler-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	E0717 22:51:38.213957   54573 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-935524" hosting pod "kube-scheduler-no-preload-935524" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:38.213967   54573 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:38.617826   54573 pod_ready.go:97] node "no-preload-935524" hosting pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:38.617855   54573 pod_ready.go:81] duration metric: took 403.88033ms waiting for pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace to be "Ready" ...
	E0717 22:51:38.617867   54573 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-935524" hosting pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:38.617878   54573 pod_ready.go:38] duration metric: took 1.191105641s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:51:38.617907   54573 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 22:51:38.634486   54573 ops.go:34] apiserver oom_adj: -16
	I0717 22:51:38.634511   54573 kubeadm.go:640] restartCluster took 21.94326064s
	I0717 22:51:38.634520   54573 kubeadm.go:406] StartCluster complete in 21.998122781s
	I0717 22:51:38.634560   54573 settings.go:142] acquiring lock: {Name:mk9be0d05c943f5010cb1ca9690e7c6a87f18950 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:51:38.634648   54573 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:51:38.637414   54573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/kubeconfig: {Name:mk1295c692902c2f497638c9fc1fd126fdb1a2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:51:38.637733   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 22:51:38.637868   54573 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 22:51:38.637955   54573 addons.go:69] Setting storage-provisioner=true in profile "no-preload-935524"
	I0717 22:51:38.637972   54573 addons.go:231] Setting addon storage-provisioner=true in "no-preload-935524"
	W0717 22:51:38.637986   54573 addons.go:240] addon storage-provisioner should already be in state true
	I0717 22:51:38.638036   54573 host.go:66] Checking if "no-preload-935524" exists ...
	I0717 22:51:38.638418   54573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:51:38.638441   54573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:51:38.638510   54573 addons.go:69] Setting default-storageclass=true in profile "no-preload-935524"
	I0717 22:51:38.638530   54573 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-935524"
	I0717 22:51:38.638684   54573 addons.go:69] Setting metrics-server=true in profile "no-preload-935524"
	I0717 22:51:38.638700   54573 addons.go:231] Setting addon metrics-server=true in "no-preload-935524"
	W0717 22:51:38.638707   54573 addons.go:240] addon metrics-server should already be in state true
	I0717 22:51:38.638751   54573 host.go:66] Checking if "no-preload-935524" exists ...
	I0717 22:51:38.638977   54573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:51:38.639016   54573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:51:38.639083   54573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:51:38.639106   54573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:51:38.644028   54573 config.go:182] Loaded profile config "no-preload-935524": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:51:38.656131   54573 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-935524" context rescaled to 1 replicas
	I0717 22:51:38.656182   54573 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 22:51:38.658128   54573 out.go:177] * Verifying Kubernetes components...
	I0717 22:51:38.659350   54573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45639
	I0717 22:51:38.662767   54573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:51:38.660678   54573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46603
	I0717 22:51:38.663403   54573 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:51:38.664191   54573 main.go:141] libmachine: Using API Version  1
	I0717 22:51:38.664207   54573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:51:38.664296   54573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46321
	I0717 22:51:38.664660   54573 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:51:38.664872   54573 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:51:38.665287   54573 main.go:141] libmachine: Using API Version  1
	I0717 22:51:38.665301   54573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:51:38.665363   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetState
	I0717 22:51:38.666826   54573 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:51:38.667345   54573 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:51:38.667411   54573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:51:38.667432   54573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:51:38.667875   54573 main.go:141] libmachine: Using API Version  1
	I0717 22:51:38.667888   54573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:51:38.669299   54573 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:51:38.669907   54573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:51:38.669941   54573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:51:38.689870   54573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I0717 22:51:38.690029   54573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43747
	I0717 22:51:38.690596   54573 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:51:38.691039   54573 main.go:141] libmachine: Using API Version  1
	I0717 22:51:38.691052   54573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:51:38.691354   54573 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:51:38.691782   54573 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:51:38.691932   54573 main.go:141] libmachine: Using API Version  1
	I0717 22:51:38.691942   54573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:51:38.692153   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetState
	I0717 22:51:38.692209   54573 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:51:38.692391   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetState
	I0717 22:51:38.693179   54573 addons.go:231] Setting addon default-storageclass=true in "no-preload-935524"
	W0717 22:51:38.693197   54573 addons.go:240] addon default-storageclass should already be in state true
	I0717 22:51:38.693226   54573 host.go:66] Checking if "no-preload-935524" exists ...
	I0717 22:51:38.693599   54573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:51:38.693627   54573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:51:38.695740   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:51:38.698283   54573 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 22:51:38.696822   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:51:38.700282   54573 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 22:51:38.700294   54573 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 22:51:38.700313   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:51:38.702588   54573 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:51:38.704435   54573 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:51:38.704453   54573 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 22:51:38.704470   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:51:38.704034   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:51:38.704509   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:51:38.704545   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:51:38.705314   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:51:38.705704   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:51:38.705962   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:51:38.706101   54573 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/no-preload-935524/id_rsa Username:docker}
	I0717 22:51:38.707998   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:51:38.708366   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:51:38.708391   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:51:38.708663   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:51:38.708827   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:51:38.708935   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:51:38.709039   54573 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/no-preload-935524/id_rsa Username:docker}
	I0717 22:51:38.715303   54573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42441
	I0717 22:51:38.715765   54573 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:51:38.716225   54573 main.go:141] libmachine: Using API Version  1
	I0717 22:51:38.716238   54573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:51:38.716515   54573 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:51:38.716900   54573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:51:38.716915   54573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:51:38.775381   54573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42195
	I0717 22:51:38.781850   54573 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:51:38.782856   54573 main.go:141] libmachine: Using API Version  1
	I0717 22:51:38.782886   54573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:51:38.783335   54573 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:51:38.783547   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetState
	I0717 22:51:38.786539   54573 main.go:141] libmachine: (no-preload-935524) Calling .DriverName
	I0717 22:51:38.786818   54573 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 22:51:38.786841   54573 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 22:51:38.786860   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHHostname
	I0717 22:51:38.789639   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:51:38.793649   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHPort
	I0717 22:51:38.793678   54573 main.go:141] libmachine: (no-preload-935524) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:7e:aa", ip: ""} in network mk-no-preload-935524: {Iface:virbr3 ExpiryTime:2023-07-17 23:50:44 +0000 UTC Type:0 Mac:52:54:00:dc:7e:aa Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:no-preload-935524 Clientid:01:52:54:00:dc:7e:aa}
	I0717 22:51:38.793701   54573 main.go:141] libmachine: (no-preload-935524) DBG | domain no-preload-935524 has defined IP address 192.168.39.6 and MAC address 52:54:00:dc:7e:aa in network mk-no-preload-935524
	I0717 22:51:38.793926   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHKeyPath
	I0717 22:51:38.794106   54573 main.go:141] libmachine: (no-preload-935524) Calling .GetSSHUsername
	I0717 22:51:38.794262   54573 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/no-preload-935524/id_rsa Username:docker}
	I0717 22:51:38.862651   54573 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 22:51:38.862675   54573 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 22:51:38.914260   54573 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 22:51:38.914294   54573 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 22:51:38.933208   54573 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:51:38.959784   54573 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 22:51:38.959817   54573 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 22:51:38.977205   54573 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 22:51:39.028067   54573 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 22:51:39.145640   54573 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0717 22:51:39.145688   54573 node_ready.go:35] waiting up to 6m0s for node "no-preload-935524" to be "Ready" ...
	I0717 22:51:40.593928   54573 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.616678929s)
	I0717 22:51:40.593974   54573 main.go:141] libmachine: Making call to close driver server
	I0717 22:51:40.593987   54573 main.go:141] libmachine: (no-preload-935524) Calling .Close
	I0717 22:51:40.594018   54573 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.660755961s)
	I0717 22:51:40.594062   54573 main.go:141] libmachine: Making call to close driver server
	I0717 22:51:40.594078   54573 main.go:141] libmachine: (no-preload-935524) Calling .Close
	I0717 22:51:40.594360   54573 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:51:40.594377   54573 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:51:40.594388   54573 main.go:141] libmachine: Making call to close driver server
	I0717 22:51:40.594397   54573 main.go:141] libmachine: (no-preload-935524) Calling .Close
	I0717 22:51:40.596155   54573 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:51:40.596173   54573 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:51:40.596184   54573 main.go:141] libmachine: Making call to close driver server
	I0717 22:51:40.596201   54573 main.go:141] libmachine: (no-preload-935524) Calling .Close
	I0717 22:51:40.596345   54573 main.go:141] libmachine: (no-preload-935524) DBG | Closing plugin on server side
	I0717 22:51:40.596378   54573 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:51:40.596393   54573 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:51:40.596406   54573 main.go:141] libmachine: Making call to close driver server
	I0717 22:51:40.596415   54573 main.go:141] libmachine: (no-preload-935524) Calling .Close
	I0717 22:51:40.596536   54573 main.go:141] libmachine: (no-preload-935524) DBG | Closing plugin on server side
	I0717 22:51:40.596579   54573 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:51:40.596597   54573 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:51:40.596672   54573 main.go:141] libmachine: (no-preload-935524) DBG | Closing plugin on server side
	I0717 22:51:40.596706   54573 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:51:40.596716   54573 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:51:40.766149   54573 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.73803779s)
	I0717 22:51:40.766218   54573 main.go:141] libmachine: Making call to close driver server
	I0717 22:51:40.766233   54573 main.go:141] libmachine: (no-preload-935524) Calling .Close
	I0717 22:51:40.766573   54573 main.go:141] libmachine: (no-preload-935524) DBG | Closing plugin on server side
	I0717 22:51:40.766619   54573 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:51:40.766629   54573 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:51:40.766639   54573 main.go:141] libmachine: Making call to close driver server
	I0717 22:51:40.766648   54573 main.go:141] libmachine: (no-preload-935524) Calling .Close
	I0717 22:51:40.766954   54573 main.go:141] libmachine: (no-preload-935524) DBG | Closing plugin on server side
	I0717 22:51:40.766987   54573 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:51:40.766996   54573 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:51:40.767004   54573 addons.go:467] Verifying addon metrics-server=true in "no-preload-935524"
	I0717 22:51:40.921642   54573 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 22:51:40.099354   54649 api_server.go:279] https://192.168.72.118:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 22:51:40.099395   54649 api_server.go:103] status: https://192.168.72.118:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 22:51:40.600101   54649 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8444/healthz ...
	I0717 22:51:40.606334   54649 api_server.go:279] https://192.168.72.118:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 22:51:40.606375   54649 api_server.go:103] status: https://192.168.72.118:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 22:51:41.100086   54649 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8444/healthz ...
	I0717 22:51:41.110410   54649 api_server.go:279] https://192.168.72.118:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 22:51:41.110443   54649 api_server.go:103] status: https://192.168.72.118:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 22:51:41.599684   54649 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8444/healthz ...
	I0717 22:51:41.615650   54649 api_server.go:279] https://192.168.72.118:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 22:51:41.615693   54649 api_server.go:103] status: https://192.168.72.118:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 22:51:42.100229   54649 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8444/healthz ...
	I0717 22:51:42.109347   54649 api_server.go:279] https://192.168.72.118:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 22:51:42.109400   54649 api_server.go:103] status: https://192.168.72.118:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 22:51:42.600180   54649 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8444/healthz ...
	I0717 22:51:42.607799   54649 api_server.go:279] https://192.168.72.118:8444/healthz returned 200:
	ok
	I0717 22:51:42.621454   54649 api_server.go:141] control plane version: v1.27.3
	I0717 22:51:42.621480   54649 api_server.go:131] duration metric: took 6.983866635s to wait for apiserver health ...
	I0717 22:51:42.621491   54649 cni.go:84] Creating CNI manager for ""
	I0717 22:51:42.621503   54649 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:51:42.623222   54649 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 22:51:41.140227   54573 addons.go:502] enable addons completed in 2.502347716s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 22:51:41.154857   54573 node_ready.go:58] node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:40.037161   53870 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.706262393s)
	I0717 22:51:40.037203   53870 crio.go:451] Took 3.706370 seconds to extract the tarball
	I0717 22:51:40.037215   53870 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 22:51:40.089356   53870 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:51:40.143494   53870 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0717 22:51:40.143520   53870 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 22:51:40.143582   53870 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:51:40.143803   53870 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0717 22:51:40.143819   53870 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 22:51:40.143889   53870 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0717 22:51:40.143972   53870 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 22:51:40.143979   53870 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0717 22:51:40.144036   53870 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 22:51:40.144084   53870 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 22:51:40.151367   53870 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:51:40.151467   53870 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0717 22:51:40.152588   53870 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 22:51:40.152741   53870 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0717 22:51:40.152887   53870 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0717 22:51:40.152985   53870 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 22:51:40.153357   53870 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 22:51:40.153384   53870 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 22:51:40.317883   53870 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0717 22:51:40.322240   53870 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0717 22:51:40.325883   53870 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0717 22:51:40.325883   53870 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0717 22:51:40.326725   53870 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0717 22:51:40.328193   53870 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0717 22:51:40.356171   53870 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 22:51:40.485259   53870 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:51:40.493227   53870 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0717 22:51:40.493266   53870 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0717 22:51:40.493304   53870 ssh_runner.go:195] Run: which crictl
	I0717 22:51:40.514366   53870 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0717 22:51:40.514409   53870 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 22:51:40.514459   53870 ssh_runner.go:195] Run: which crictl
	I0717 22:51:40.578201   53870 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0717 22:51:40.578304   53870 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 22:51:40.578312   53870 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0717 22:51:40.578342   53870 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 22:51:40.578363   53870 ssh_runner.go:195] Run: which crictl
	I0717 22:51:40.578396   53870 ssh_runner.go:195] Run: which crictl
	I0717 22:51:40.578451   53870 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0717 22:51:40.578485   53870 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 22:51:40.578534   53870 ssh_runner.go:195] Run: which crictl
	I0717 22:51:40.578248   53870 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0717 22:51:40.578638   53870 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0717 22:51:40.578247   53870 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0717 22:51:40.578717   53870 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0717 22:51:40.578756   53870 ssh_runner.go:195] Run: which crictl
	I0717 22:51:40.578688   53870 ssh_runner.go:195] Run: which crictl
	I0717 22:51:40.717404   53870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0717 22:51:40.717482   53870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0717 22:51:40.717627   53870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0717 22:51:40.717740   53870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0717 22:51:40.717814   53870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0717 22:51:40.717918   53870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0717 22:51:40.718015   53870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 22:51:40.856246   53870 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0717 22:51:40.856291   53870 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0717 22:51:40.856403   53870 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0717 22:51:40.856438   53870 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0717 22:51:40.856526   53870 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0717 22:51:40.856575   53870 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0717 22:51:40.856604   53870 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0717 22:51:40.856653   53870 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I0717 22:51:40.861702   53870 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0717 22:51:40.861718   53870 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0717 22:51:40.861766   53870 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0717 22:51:42.019439   53870 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.157649631s)
	I0717 22:51:42.019471   53870 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0717 22:51:42.019512   53870 cache_images.go:92] LoadImages completed in 1.875976905s
	W0717 22:51:42.019588   53870 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16899-15759/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
	I0717 22:51:42.019667   53870 ssh_runner.go:195] Run: crio config
	I0717 22:51:42.084276   53870 cni.go:84] Creating CNI manager for ""
	I0717 22:51:42.084310   53870 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:51:42.084329   53870 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 22:51:42.084352   53870 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.149 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-332820 NodeName:old-k8s-version-332820 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.149"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.149 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 22:51:42.084534   53870 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.149
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-332820"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.149
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.149"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-332820
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.149:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 22:51:42.084631   53870 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-332820 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-332820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 22:51:42.084705   53870 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0717 22:51:42.095493   53870 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 22:51:42.095576   53870 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 22:51:42.106777   53870 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0717 22:51:42.126860   53870 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 22:51:42.146610   53870 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0717 22:51:42.167959   53870 ssh_runner.go:195] Run: grep 192.168.50.149	control-plane.minikube.internal$ /etc/hosts
	I0717 22:51:42.171993   53870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.149	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:51:42.188635   53870 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820 for IP: 192.168.50.149
	I0717 22:51:42.188673   53870 certs.go:190] acquiring lock for shared ca certs: {Name:mk358cdd8ffcf2f8ada4337c2bb687d932ef5afe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:51:42.188887   53870 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key
	I0717 22:51:42.188945   53870 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key
	I0717 22:51:42.189042   53870 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/client.key
	I0717 22:51:42.189125   53870 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/apiserver.key.7e281e16
	I0717 22:51:42.189177   53870 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/proxy-client.key
	I0717 22:51:42.189322   53870 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem (1338 bytes)
	W0717 22:51:42.189362   53870 certs.go:433] ignoring /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990_empty.pem, impossibly tiny 0 bytes
	I0717 22:51:42.189377   53870 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 22:51:42.189413   53870 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem (1078 bytes)
	I0717 22:51:42.189456   53870 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem (1123 bytes)
	I0717 22:51:42.189502   53870 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem (1675 bytes)
	I0717 22:51:42.189590   53870 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem (1708 bytes)
	I0717 22:51:42.190495   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 22:51:42.219201   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 22:51:42.248355   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 22:51:42.275885   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 22:51:42.303987   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 22:51:42.329331   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 22:51:42.354424   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 22:51:42.386422   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 22:51:42.418872   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /usr/share/ca-certificates/229902.pem (1708 bytes)
	I0717 22:51:42.448869   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 22:51:42.473306   53870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem --> /usr/share/ca-certificates/22990.pem (1338 bytes)
	I0717 22:51:42.499302   53870 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 22:51:42.519833   53870 ssh_runner.go:195] Run: openssl version
	I0717 22:51:42.525933   53870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/229902.pem && ln -fs /usr/share/ca-certificates/229902.pem /etc/ssl/certs/229902.pem"
	I0717 22:51:42.537165   53870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/229902.pem
	I0717 22:51:42.545354   53870 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 21:49 /usr/share/ca-certificates/229902.pem
	I0717 22:51:42.545419   53870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/229902.pem
	I0717 22:51:42.551786   53870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/229902.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 22:51:42.561900   53870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 22:51:42.571880   53870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:51:42.576953   53870 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:51:42.577017   53870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:51:42.583311   53870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 22:51:42.593618   53870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22990.pem && ln -fs /usr/share/ca-certificates/22990.pem /etc/ssl/certs/22990.pem"
	I0717 22:51:42.604326   53870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22990.pem
	I0717 22:51:42.610022   53870 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 21:49 /usr/share/ca-certificates/22990.pem
	I0717 22:51:42.610084   53870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22990.pem
	I0717 22:51:42.615999   53870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22990.pem /etc/ssl/certs/51391683.0"
	I0717 22:51:42.627353   53870 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 22:51:42.632186   53870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 22:51:42.638738   53870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 22:51:42.645118   53870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 22:51:42.651619   53870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 22:51:42.658542   53870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 22:51:42.665449   53870 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 22:51:42.673656   53870 kubeadm.go:404] StartCluster: {Name:old-k8s-version-332820 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-332820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.149 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:51:42.673776   53870 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 22:51:42.673832   53870 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:51:42.718032   53870 cri.go:89] found id: ""
	I0717 22:51:42.718127   53870 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 22:51:42.731832   53870 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 22:51:42.731856   53870 kubeadm.go:636] restartCluster start
	I0717 22:51:42.731907   53870 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 22:51:42.741531   53870 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:42.743035   53870 kubeconfig.go:92] found "old-k8s-version-332820" server: "https://192.168.50.149:8443"
	I0717 22:51:42.746440   53870 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 22:51:42.755816   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:42.755878   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:42.768767   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:39.384892   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:41.876361   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:42.624643   54649 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 22:51:42.660905   54649 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 22:51:42.733831   54649 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 22:51:42.761055   54649 system_pods.go:59] 8 kube-system pods found
	I0717 22:51:42.761093   54649 system_pods.go:61] "coredns-5d78c9869d-wpmhl" [ebfdf1a8-16b1-4e11-8bda-0b6afa127ed2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 22:51:42.761113   54649 system_pods.go:61] "etcd-default-k8s-diff-port-504828" [47338c6f-2509-4051-acaa-7281bbafe376] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 22:51:42.761125   54649 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-504828" [16961d82-f852-4c99-81af-a5b6290222d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 22:51:42.761138   54649 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-504828" [9e226305-9f41-4e56-8f8d-a250f46ab852] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 22:51:42.761165   54649 system_pods.go:61] "kube-proxy-kbp9x" [5a581d9c-4efa-49b7-8bd9-b877d5d12871] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 22:51:42.761183   54649 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-504828" [0d63a508-5b2b-4b61-b087-afdd063afbfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 22:51:42.761197   54649 system_pods.go:61] "metrics-server-74d5c6b9c-tj4st" [2cd90033-b07a-4458-8dac-5a618d4ed7ce] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:51:42.761207   54649 system_pods.go:61] "storage-provisioner" [c306122c-f32a-4455-a825-3e272a114ddc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 22:51:42.761217   54649 system_pods.go:74] duration metric: took 27.36753ms to wait for pod list to return data ...
	I0717 22:51:42.761226   54649 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:51:42.766615   54649 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:51:42.766640   54649 node_conditions.go:123] node cpu capacity is 2
	I0717 22:51:42.766651   54649 node_conditions.go:105] duration metric: took 5.41582ms to run NodePressure ...
	I0717 22:51:42.766666   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:43.144614   54649 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 22:51:43.151192   54649 kubeadm.go:787] kubelet initialised
	I0717 22:51:43.151229   54649 kubeadm.go:788] duration metric: took 6.579448ms waiting for restarted kubelet to initialise ...
	I0717 22:51:43.151245   54649 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:51:43.157867   54649 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-wpmhl" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:45.174145   54649 pod_ready.go:102] pod "coredns-5d78c9869d-wpmhl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:47.177320   54649 pod_ready.go:102] pod "coredns-5d78c9869d-wpmhl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:43.656678   54573 node_ready.go:58] node "no-preload-935524" has status "Ready":"False"
	I0717 22:51:46.154037   54573 node_ready.go:49] node "no-preload-935524" has status "Ready":"True"
	I0717 22:51:46.154060   54573 node_ready.go:38] duration metric: took 7.008304923s waiting for node "no-preload-935524" to be "Ready" ...
	I0717 22:51:46.154068   54573 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:51:46.161581   54573 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-2mpst" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:46.167554   54573 pod_ready.go:92] pod "coredns-5d78c9869d-2mpst" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:46.167581   54573 pod_ready.go:81] duration metric: took 5.973951ms waiting for pod "coredns-5d78c9869d-2mpst" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:46.167593   54573 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:43.269246   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:43.269363   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:43.281553   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:43.769539   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:43.769648   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:43.784373   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:44.268932   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:44.269030   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:44.280678   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:44.769180   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:44.769268   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:44.782107   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:45.269718   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:45.269795   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:45.282616   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:45.768937   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:45.769014   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:45.782121   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:46.269531   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:46.269628   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:46.281901   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:46.769344   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:46.769437   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:46.784477   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:47.268980   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:47.269070   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:47.280858   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:47.769478   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:47.769577   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:47.783095   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:44.373907   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:46.375240   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:49.671705   54649 pod_ready.go:102] pod "coredns-5d78c9869d-wpmhl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:50.172053   54649 pod_ready.go:92] pod "coredns-5d78c9869d-wpmhl" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:50.172081   54649 pod_ready.go:81] duration metric: took 7.014190645s waiting for pod "coredns-5d78c9869d-wpmhl" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:50.172094   54649 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:52.186327   54649 pod_ready.go:102] pod "etcd-default-k8s-diff-port-504828" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:48.180621   54573 pod_ready.go:92] pod "etcd-no-preload-935524" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:48.180653   54573 pod_ready.go:81] duration metric: took 2.0130508s waiting for pod "etcd-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:48.180666   54573 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:48.185965   54573 pod_ready.go:92] pod "kube-apiserver-no-preload-935524" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:48.185985   54573 pod_ready.go:81] duration metric: took 5.310471ms waiting for pod "kube-apiserver-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:48.185996   54573 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:48.191314   54573 pod_ready.go:92] pod "kube-controller-manager-no-preload-935524" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:48.191335   54573 pod_ready.go:81] duration metric: took 5.331248ms waiting for pod "kube-controller-manager-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:48.191346   54573 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qhp66" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:48.197557   54573 pod_ready.go:92] pod "kube-proxy-qhp66" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:48.197576   54573 pod_ready.go:81] duration metric: took 6.222911ms waiting for pod "kube-proxy-qhp66" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:48.197586   54573 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:50.567470   54573 pod_ready.go:92] pod "kube-scheduler-no-preload-935524" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:50.567494   54573 pod_ready.go:81] duration metric: took 2.369900836s waiting for pod "kube-scheduler-no-preload-935524" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:50.567504   54573 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:52.582697   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:48.269386   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:48.269464   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:48.281178   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:48.769171   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:48.769255   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:48.781163   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:49.269813   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:49.269890   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:49.282099   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:49.769555   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:49.769659   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:49.782298   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:50.269111   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:50.269176   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:50.280805   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:50.769333   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:50.769438   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:50.781760   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:51.269299   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:51.269368   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:51.281559   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:51.769032   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:51.769096   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:51.780505   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:52.269033   53870 api_server.go:166] Checking apiserver status ...
	I0717 22:51:52.269134   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 22:51:52.281362   53870 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 22:51:52.755841   53870 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 22:51:52.755871   53870 kubeadm.go:1128] stopping kube-system containers ...
	I0717 22:51:52.755882   53870 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 22:51:52.755945   53870 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:51:52.789292   53870 cri.go:89] found id: ""
	I0717 22:51:52.789370   53870 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 22:51:52.805317   53870 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 22:51:52.814714   53870 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:51:52.814778   53870 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 22:51:52.824024   53870 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 22:51:52.824045   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:48.376709   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:50.877922   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:54.187055   54649 pod_ready.go:92] pod "etcd-default-k8s-diff-port-504828" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:54.187076   54649 pod_ready.go:81] duration metric: took 4.01497478s waiting for pod "etcd-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:54.187084   54649 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:54.195396   54649 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-504828" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:54.195426   54649 pod_ready.go:81] duration metric: took 8.33448ms waiting for pod "kube-apiserver-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:54.195440   54649 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:54.205666   54649 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-504828" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:54.205694   54649 pod_ready.go:81] duration metric: took 10.243213ms waiting for pod "kube-controller-manager-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:54.205713   54649 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kbp9x" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:54.217007   54649 pod_ready.go:92] pod "kube-proxy-kbp9x" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:54.217030   54649 pod_ready.go:81] duration metric: took 11.309771ms waiting for pod "kube-proxy-kbp9x" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:54.217041   54649 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:54.225509   54649 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-504828" in "kube-system" namespace has status "Ready":"True"
	I0717 22:51:54.225558   54649 pod_ready.go:81] duration metric: took 8.507279ms waiting for pod "kube-scheduler-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:54.225572   54649 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace to be "Ready" ...
	I0717 22:51:56.592993   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:54.582860   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:56.583634   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:52.949663   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:53.985430   53870 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.035733754s)
	I0717 22:51:53.985459   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:54.222833   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:54.357196   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:51:54.468442   53870 api_server.go:52] waiting for apiserver process to appear ...
	I0717 22:51:54.468516   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:54.999095   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:55.499700   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:55.999447   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:51:56.051829   53870 api_server.go:72] duration metric: took 1.583387644s to wait for apiserver process to appear ...
	I0717 22:51:56.051856   53870 api_server.go:88] waiting for apiserver healthz status ...
	I0717 22:51:56.051872   53870 api_server.go:253] Checking apiserver healthz at https://192.168.50.149:8443/healthz ...
	I0717 22:51:53.374486   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:55.375033   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:57.376561   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:59.093181   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:01.592585   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:51:59.084169   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:01.583540   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:01.053643   53870 api_server.go:269] stopped: https://192.168.50.149:8443/healthz: Get "https://192.168.50.149:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 22:52:01.554418   53870 api_server.go:253] Checking apiserver healthz at https://192.168.50.149:8443/healthz ...
	I0717 22:52:01.627371   53870 api_server.go:279] https://192.168.50.149:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 22:52:01.627400   53870 api_server.go:103] status: https://192.168.50.149:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 22:52:02.054761   53870 api_server.go:253] Checking apiserver healthz at https://192.168.50.149:8443/healthz ...
	I0717 22:52:02.060403   53870 api_server.go:279] https://192.168.50.149:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0717 22:52:02.060431   53870 api_server.go:103] status: https://192.168.50.149:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0717 22:52:02.554085   53870 api_server.go:253] Checking apiserver healthz at https://192.168.50.149:8443/healthz ...
	I0717 22:52:02.561664   53870 api_server.go:279] https://192.168.50.149:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0717 22:52:02.561699   53870 api_server.go:103] status: https://192.168.50.149:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0717 22:51:59.876307   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:02.374698   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:03.054028   53870 api_server.go:253] Checking apiserver healthz at https://192.168.50.149:8443/healthz ...
	I0717 22:52:03.061055   53870 api_server.go:279] https://192.168.50.149:8443/healthz returned 200:
	ok
	I0717 22:52:03.069434   53870 api_server.go:141] control plane version: v1.16.0
	I0717 22:52:03.069465   53870 api_server.go:131] duration metric: took 7.017602055s to wait for apiserver health ...
	I0717 22:52:03.069475   53870 cni.go:84] Creating CNI manager for ""
	I0717 22:52:03.069485   53870 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:52:03.071306   53870 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 22:52:04.092490   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:06.592435   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:04.082787   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:06.089097   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:03.073009   53870 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 22:52:03.085399   53870 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 22:52:03.106415   53870 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 22:52:03.117136   53870 system_pods.go:59] 7 kube-system pods found
	I0717 22:52:03.117181   53870 system_pods.go:61] "coredns-5644d7b6d9-s9vtg" [7a1ccabb-ad03-47ef-804a-eff0b00ea65c] Running
	I0717 22:52:03.117191   53870 system_pods.go:61] "etcd-old-k8s-version-332820" [a1c2ef8d-fdb3-4394-944b-042870d25c4b] Running
	I0717 22:52:03.117198   53870 system_pods.go:61] "kube-apiserver-old-k8s-version-332820" [39a09f85-abd5-442a-887d-c04a91b87258] Running
	I0717 22:52:03.117206   53870 system_pods.go:61] "kube-controller-manager-old-k8s-version-332820" [94c599c4-d22c-4b5e-bf7b-ce0b81e21283] Running
	I0717 22:52:03.117212   53870 system_pods.go:61] "kube-proxy-vkjpn" [8fe8844c-f199-4bcb-b6a0-c6023c06ef75] Running
	I0717 22:52:03.117219   53870 system_pods.go:61] "kube-scheduler-old-k8s-version-332820" [a2102927-3de6-45d8-a37e-665adde8ca47] Running
	I0717 22:52:03.117227   53870 system_pods.go:61] "storage-provisioner" [b9bcb25d-294e-49ae-8650-98b1c7e5b4f8] Running
	I0717 22:52:03.117234   53870 system_pods.go:74] duration metric: took 10.793064ms to wait for pod list to return data ...
	I0717 22:52:03.117247   53870 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:52:03.122227   53870 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:52:03.122275   53870 node_conditions.go:123] node cpu capacity is 2
	I0717 22:52:03.122294   53870 node_conditions.go:105] duration metric: took 5.039156ms to run NodePressure ...
	I0717 22:52:03.122322   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 22:52:03.337823   53870 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 22:52:03.342104   53870 retry.go:31] will retry after 190.852011ms: kubelet not initialised
	I0717 22:52:03.537705   53870 retry.go:31] will retry after 190.447443ms: kubelet not initialised
	I0717 22:52:03.735450   53870 retry.go:31] will retry after 294.278727ms: kubelet not initialised
	I0717 22:52:04.034965   53870 retry.go:31] will retry after 808.339075ms: kubelet not initialised
	I0717 22:52:04.847799   53870 retry.go:31] will retry after 1.685522396s: kubelet not initialised
	I0717 22:52:06.537765   53870 retry.go:31] will retry after 1.595238483s: kubelet not initialised
	I0717 22:52:04.377461   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:06.876135   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:09.090739   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:11.093234   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:08.583118   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:11.083446   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:08.139297   53870 retry.go:31] will retry after 4.170190829s: kubelet not initialised
	I0717 22:52:12.317346   53870 retry.go:31] will retry after 5.652204651s: kubelet not initialised
	I0717 22:52:09.374610   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:11.375332   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:13.590999   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:15.591041   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:13.583868   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:16.081948   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:13.376027   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:15.874857   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:17.876130   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:17.593544   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:20.092121   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:18.082068   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:20.083496   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:22.582358   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:17.975640   53870 retry.go:31] will retry after 6.695949238s: kubelet not initialised
	I0717 22:52:20.375494   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:22.882209   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:22.591705   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:25.090965   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:25.082268   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:27.582422   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:24.676746   53870 retry.go:31] will retry after 10.942784794s: kubelet not initialised
	I0717 22:52:25.374526   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:27.375728   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:27.591516   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:30.091872   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:30.081334   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:32.082535   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:29.874508   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:31.876648   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:32.592067   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:35.092067   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:34.082954   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:36.585649   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:35.625671   53870 retry.go:31] will retry after 20.23050626s: kubelet not initialised
	I0717 22:52:34.376118   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:36.875654   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:37.592201   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:40.091539   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:39.081430   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:41.082360   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:39.374867   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:41.375759   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:42.590417   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:44.591742   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:46.593256   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:43.083211   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:45.084404   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:47.085099   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:43.376030   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:45.873482   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:47.875479   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:49.092376   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:51.592430   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:49.582087   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:52.083003   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:49.878981   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:52.374685   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:54.090617   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:56.091597   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:54.583455   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:57.081342   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:55.864261   53870 kubeadm.go:787] kubelet initialised
	I0717 22:52:55.864281   53870 kubeadm.go:788] duration metric: took 52.526433839s waiting for restarted kubelet to initialise ...
	I0717 22:52:55.864287   53870 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:52:55.870685   53870 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-s9vtg" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:55.877709   53870 pod_ready.go:92] pod "coredns-5644d7b6d9-s9vtg" in "kube-system" namespace has status "Ready":"True"
	I0717 22:52:55.877737   53870 pod_ready.go:81] duration metric: took 7.026411ms waiting for pod "coredns-5644d7b6d9-s9vtg" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:55.877750   53870 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-vnldz" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:55.883932   53870 pod_ready.go:92] pod "coredns-5644d7b6d9-vnldz" in "kube-system" namespace has status "Ready":"True"
	I0717 22:52:55.883961   53870 pod_ready.go:81] duration metric: took 6.200731ms waiting for pod "coredns-5644d7b6d9-vnldz" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:55.883974   53870 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-332820" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:55.889729   53870 pod_ready.go:92] pod "etcd-old-k8s-version-332820" in "kube-system" namespace has status "Ready":"True"
	I0717 22:52:55.889749   53870 pod_ready.go:81] duration metric: took 5.767797ms waiting for pod "etcd-old-k8s-version-332820" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:55.889757   53870 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-332820" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:55.895286   53870 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-332820" in "kube-system" namespace has status "Ready":"True"
	I0717 22:52:55.895308   53870 pod_ready.go:81] duration metric: took 5.545198ms waiting for pod "kube-apiserver-old-k8s-version-332820" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:55.895316   53870 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-332820" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:56.263125   53870 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-332820" in "kube-system" namespace has status "Ready":"True"
	I0717 22:52:56.263153   53870 pod_ready.go:81] duration metric: took 367.829768ms waiting for pod "kube-controller-manager-old-k8s-version-332820" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:56.263166   53870 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vkjpn" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:56.663235   53870 pod_ready.go:92] pod "kube-proxy-vkjpn" in "kube-system" namespace has status "Ready":"True"
	I0717 22:52:56.663262   53870 pod_ready.go:81] duration metric: took 400.086969ms waiting for pod "kube-proxy-vkjpn" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:56.663276   53870 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-332820" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:57.061892   53870 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-332820" in "kube-system" namespace has status "Ready":"True"
	I0717 22:52:57.061917   53870 pod_ready.go:81] duration metric: took 398.633591ms waiting for pod "kube-scheduler-old-k8s-version-332820" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:57.061930   53870 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace to be "Ready" ...
	I0717 22:52:54.374907   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:56.875242   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:58.092082   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:00.590626   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:59.081826   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:01.086158   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:59.469353   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:01.968383   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:52:59.374420   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:01.374640   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:02.595710   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:05.094211   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:03.582006   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:05.582348   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:07.582585   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:03.969801   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:06.469220   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:03.374665   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:05.375182   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:07.874673   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:07.593189   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:10.091253   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:10.083277   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:12.581195   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:08.973101   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:11.471187   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:10.375255   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:12.875038   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:12.593192   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:15.090204   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:17.091416   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:14.581962   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:17.082092   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:13.970246   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:16.469918   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:15.374678   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:17.375402   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:19.592518   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:22.090462   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:19.582582   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:21.582788   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:18.969975   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:21.471221   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:19.876416   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:22.377064   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:24.592012   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:26.593013   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:24.082409   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:26.581889   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:23.967680   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:25.969061   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:24.876092   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:26.876727   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:29.090937   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:31.092276   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:28.583371   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:30.588656   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:28.470667   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:30.969719   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:29.374066   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:31.375107   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:33.590361   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:35.591199   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:33.082794   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:35.583369   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:33.468669   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:35.468917   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:37.469656   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:33.873830   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:35.875551   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:38.091032   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:40.095610   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:38.083632   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:40.584069   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:39.970389   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:41.972121   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:38.374344   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:40.375117   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:42.873817   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:42.591348   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:44.591801   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:47.091463   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:43.092800   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:45.583147   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:44.468092   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:46.968583   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:44.875165   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:46.875468   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:49.592016   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:52.092191   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:48.082358   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:50.581430   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:52.581722   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:48.970562   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:51.469666   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:49.374655   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:51.374912   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:54.590857   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:57.090986   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:54.581979   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:57.081602   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:53.969845   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:56.470092   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:53.874630   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:56.374076   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:59.093019   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:01.590296   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:59.581481   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:02.081651   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:58.969243   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:00.969793   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:53:58.874500   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:00.875485   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:03.591663   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:06.091377   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:04.082661   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:06.581409   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:02.969900   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:05.469513   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:07.469630   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:03.374576   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:05.874492   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:07.876025   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:08.092299   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:10.591576   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:08.582962   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:11.081623   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:09.469674   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:11.970568   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:09.878298   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:12.375542   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:13.089815   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:15.091295   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:13.082485   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:15.582545   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:14.469264   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:16.970184   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:14.876188   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:17.375197   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:17.590457   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:19.590668   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:21.592281   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:18.082882   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:20.581232   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:22.581451   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:19.470007   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:21.972545   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:19.874905   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:21.876111   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:24.090912   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:26.091423   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:24.582104   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:27.082466   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:24.468612   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:26.468733   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:24.375195   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:26.375302   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:28.092426   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:30.590750   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:29.083200   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:31.581109   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:28.469411   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:30.474485   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:28.376063   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:30.874877   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:32.875720   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:32.591688   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:34.592382   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:37.091435   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:33.582072   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:35.582710   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:32.968863   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:34.969408   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:37.469461   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:35.375657   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:37.873420   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:39.091786   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:41.591723   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:38.082103   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:40.582480   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:39.470591   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:41.969425   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:39.876026   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:41.876450   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:44.090732   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:46.091209   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:43.082746   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:45.580745   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:47.581165   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:44.469624   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:46.469853   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:44.375526   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:46.874381   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:48.091542   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:50.591973   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:49.583795   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:52.084521   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:48.969202   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:50.969996   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:48.874772   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:50.876953   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:53.092284   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:55.591945   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:54.582260   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:56.582456   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:53.468921   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:55.469467   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:57.469588   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:53.375369   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:55.375834   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:57.875412   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:58.092340   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:00.593507   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:58.582790   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:01.082714   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:59.968899   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:01.970513   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:54:59.876100   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:02.377093   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:02.594240   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:05.091858   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:03.584934   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:06.082560   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:04.469605   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:06.470074   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:04.874495   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:06.874619   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:07.591151   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:09.594253   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:12.092136   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:08.082731   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:10.594934   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:08.970358   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:10.972021   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:08.875055   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:10.875177   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:11.360474   54248 pod_ready.go:81] duration metric: took 4m0.00020957s waiting for pod "metrics-server-74d5c6b9c-jl7jl" in "kube-system" namespace to be "Ready" ...
	E0717 22:55:11.360506   54248 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 22:55:11.360523   54248 pod_ready.go:38] duration metric: took 4m12.083431067s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:55:11.360549   54248 kubeadm.go:640] restartCluster took 4m32.267522493s
	W0717 22:55:11.360621   54248 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0717 22:55:11.360653   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 22:55:14.094015   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:16.590201   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:13.082448   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:15.581674   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:17.582135   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:13.471096   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:15.970057   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:18.591981   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:21.091787   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:19.584462   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:22.082310   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:18.469828   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:20.970377   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:23.092278   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:25.594454   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:24.583377   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:27.082479   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:23.470427   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:25.473350   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:28.091878   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:30.092032   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:29.582576   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:31.584147   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:27.969045   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:30.468478   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:32.469942   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:32.591274   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:34.591477   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:37.089772   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:34.082460   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:36.082687   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:34.470431   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:36.470791   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:39.091253   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:41.091286   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:38.082836   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:40.581494   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:42.583634   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:38.969011   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:40.969922   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:43.092434   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:45.591302   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:45.083869   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:47.582454   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:43.468968   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:45.469340   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:47.471805   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:43.113858   54248 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.753186356s)
	I0717 22:55:43.113920   54248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:55:43.128803   54248 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 22:55:43.138891   54248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 22:55:43.148155   54248 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:55:43.148209   54248 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 22:55:43.357368   54248 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 22:55:47.591967   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:50.092046   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:52.092670   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:50.081152   54573 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:50.568456   54573 pod_ready.go:81] duration metric: took 4m0.000934324s waiting for pod "metrics-server-74d5c6b9c-tlbpl" in "kube-system" namespace to be "Ready" ...
	E0717 22:55:50.568492   54573 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 22:55:50.568506   54573 pod_ready.go:38] duration metric: took 4m4.414427298s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:55:50.568531   54573 api_server.go:52] waiting for apiserver process to appear ...
	I0717 22:55:50.568581   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 22:55:50.568650   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 22:55:50.622016   54573 cri.go:89] found id: "c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f"
	I0717 22:55:50.622048   54573 cri.go:89] found id: ""
	I0717 22:55:50.622058   54573 logs.go:284] 1 containers: [c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f]
	I0717 22:55:50.622114   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:50.627001   54573 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 22:55:50.627065   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 22:55:50.665053   54573 cri.go:89] found id: "98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea"
	I0717 22:55:50.665073   54573 cri.go:89] found id: ""
	I0717 22:55:50.665082   54573 logs.go:284] 1 containers: [98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea]
	I0717 22:55:50.665143   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:50.670198   54573 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 22:55:50.670261   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 22:55:50.705569   54573 cri.go:89] found id: "acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266"
	I0717 22:55:50.705595   54573 cri.go:89] found id: ""
	I0717 22:55:50.705604   54573 logs.go:284] 1 containers: [acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266]
	I0717 22:55:50.705669   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:50.710494   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 22:55:50.710569   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 22:55:50.772743   54573 cri.go:89] found id: "692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629"
	I0717 22:55:50.772768   54573 cri.go:89] found id: ""
	I0717 22:55:50.772776   54573 logs.go:284] 1 containers: [692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629]
	I0717 22:55:50.772831   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:50.777741   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 22:55:50.777813   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 22:55:50.809864   54573 cri.go:89] found id: "9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567"
	I0717 22:55:50.809892   54573 cri.go:89] found id: ""
	I0717 22:55:50.809903   54573 logs.go:284] 1 containers: [9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567]
	I0717 22:55:50.809963   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:50.814586   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 22:55:50.814654   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 22:55:50.850021   54573 cri.go:89] found id: "f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c"
	I0717 22:55:50.850047   54573 cri.go:89] found id: ""
	I0717 22:55:50.850056   54573 logs.go:284] 1 containers: [f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c]
	I0717 22:55:50.850125   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:50.854615   54573 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 22:55:50.854685   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 22:55:50.893272   54573 cri.go:89] found id: ""
	I0717 22:55:50.893300   54573 logs.go:284] 0 containers: []
	W0717 22:55:50.893310   54573 logs.go:286] No container was found matching "kindnet"
	I0717 22:55:50.893318   54573 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 22:55:50.893377   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 22:55:50.926652   54573 cri.go:89] found id: "a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b"
	I0717 22:55:50.926676   54573 cri.go:89] found id: "4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6"
	I0717 22:55:50.926682   54573 cri.go:89] found id: ""
	I0717 22:55:50.926690   54573 logs.go:284] 2 containers: [a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b 4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6]
	I0717 22:55:50.926747   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:50.931220   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:50.935745   54573 logs.go:123] Gathering logs for kubelet ...
	I0717 22:55:50.935772   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 22:55:51.002727   54573 logs.go:123] Gathering logs for kube-scheduler [692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629] ...
	I0717 22:55:51.002760   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629"
	I0717 22:55:51.046774   54573 logs.go:123] Gathering logs for kube-proxy [9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567] ...
	I0717 22:55:51.046811   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567"
	I0717 22:55:51.081441   54573 logs.go:123] Gathering logs for storage-provisioner [a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b] ...
	I0717 22:55:51.081472   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b"
	I0717 22:55:51.119354   54573 logs.go:123] Gathering logs for CRI-O ...
	I0717 22:55:51.119394   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 22:55:51.710591   54573 logs.go:123] Gathering logs for kube-controller-manager [f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c] ...
	I0717 22:55:51.710634   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c"
	I0717 22:55:51.758647   54573 logs.go:123] Gathering logs for storage-provisioner [4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6] ...
	I0717 22:55:51.758679   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6"
	I0717 22:55:51.792417   54573 logs.go:123] Gathering logs for container status ...
	I0717 22:55:51.792458   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 22:55:51.836268   54573 logs.go:123] Gathering logs for dmesg ...
	I0717 22:55:51.836302   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 22:55:51.852009   54573 logs.go:123] Gathering logs for describe nodes ...
	I0717 22:55:51.852038   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 22:55:52.018156   54573 logs.go:123] Gathering logs for kube-apiserver [c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f] ...
	I0717 22:55:52.018191   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f"
	I0717 22:55:52.061680   54573 logs.go:123] Gathering logs for etcd [98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea] ...
	I0717 22:55:52.061723   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea"
	I0717 22:55:52.105407   54573 logs.go:123] Gathering logs for coredns [acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266] ...
	I0717 22:55:52.105437   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266"
	I0717 22:55:49.969074   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:51.969157   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:54.934299   54248 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0717 22:55:54.934395   54248 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 22:55:54.934498   54248 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 22:55:54.934616   54248 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 22:55:54.934741   54248 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 22:55:54.934823   54248 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 22:55:54.936386   54248 out.go:204]   - Generating certificates and keys ...
	I0717 22:55:54.936475   54248 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 22:55:54.936548   54248 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 22:55:54.936643   54248 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 22:55:54.936719   54248 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0717 22:55:54.936803   54248 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 22:55:54.936871   54248 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0717 22:55:54.936947   54248 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0717 22:55:54.937023   54248 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0717 22:55:54.937125   54248 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 22:55:54.937219   54248 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 22:55:54.937269   54248 kubeadm.go:322] [certs] Using the existing "sa" key
	I0717 22:55:54.937333   54248 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 22:55:54.937395   54248 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 22:55:54.937460   54248 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 22:55:54.937551   54248 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 22:55:54.937620   54248 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 22:55:54.937744   54248 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 22:55:54.937846   54248 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 22:55:54.937894   54248 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 22:55:54.937990   54248 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 22:55:54.939409   54248 out.go:204]   - Booting up control plane ...
	I0717 22:55:54.939534   54248 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 22:55:54.939640   54248 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 22:55:54.939733   54248 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 22:55:54.939867   54248 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 22:55:54.940059   54248 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 22:55:54.940157   54248 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504894 seconds
	I0717 22:55:54.940283   54248 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 22:55:54.940445   54248 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 22:55:54.940525   54248 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 22:55:54.940756   54248 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-571296 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 22:55:54.940829   54248 kubeadm.go:322] [bootstrap-token] Using token: zn3d72.w9x4plx1baw35867
	I0717 22:55:54.942338   54248 out.go:204]   - Configuring RBAC rules ...
	I0717 22:55:54.942484   54248 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 22:55:54.942583   54248 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 22:55:54.942759   54248 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 22:55:54.942920   54248 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 22:55:54.943088   54248 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 22:55:54.943207   54248 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 22:55:54.943365   54248 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 22:55:54.943433   54248 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 22:55:54.943527   54248 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 22:55:54.943541   54248 kubeadm.go:322] 
	I0717 22:55:54.943646   54248 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 22:55:54.943673   54248 kubeadm.go:322] 
	I0717 22:55:54.943765   54248 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 22:55:54.943774   54248 kubeadm.go:322] 
	I0717 22:55:54.943814   54248 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 22:55:54.943906   54248 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 22:55:54.943997   54248 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 22:55:54.944009   54248 kubeadm.go:322] 
	I0717 22:55:54.944107   54248 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0717 22:55:54.944121   54248 kubeadm.go:322] 
	I0717 22:55:54.944194   54248 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 22:55:54.944204   54248 kubeadm.go:322] 
	I0717 22:55:54.944277   54248 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 22:55:54.944390   54248 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 22:55:54.944472   54248 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 22:55:54.944479   54248 kubeadm.go:322] 
	I0717 22:55:54.944574   54248 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 22:55:54.944667   54248 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 22:55:54.944677   54248 kubeadm.go:322] 
	I0717 22:55:54.944778   54248 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token zn3d72.w9x4plx1baw35867 \
	I0717 22:55:54.944924   54248 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb \
	I0717 22:55:54.944959   54248 kubeadm.go:322] 	--control-plane 
	I0717 22:55:54.944965   54248 kubeadm.go:322] 
	I0717 22:55:54.945096   54248 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 22:55:54.945110   54248 kubeadm.go:322] 
	I0717 22:55:54.945206   54248 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token zn3d72.w9x4plx1baw35867 \
	I0717 22:55:54.945367   54248 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb 
	I0717 22:55:54.945384   54248 cni.go:84] Creating CNI manager for ""
	I0717 22:55:54.945396   54248 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:55:54.947694   54248 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 22:55:54.092792   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:54.226690   54649 pod_ready.go:81] duration metric: took 4m0.00109908s waiting for pod "metrics-server-74d5c6b9c-tj4st" in "kube-system" namespace to be "Ready" ...
	E0717 22:55:54.226723   54649 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 22:55:54.226748   54649 pod_ready.go:38] duration metric: took 4m11.075490865s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:55:54.226791   54649 kubeadm.go:640] restartCluster took 4m33.196357187s
	W0717 22:55:54.226860   54649 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0717 22:55:54.226891   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 22:55:54.639076   54573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:55:54.659284   54573 api_server.go:72] duration metric: took 4m16.00305446s to wait for apiserver process to appear ...
	I0717 22:55:54.659324   54573 api_server.go:88] waiting for apiserver healthz status ...
	I0717 22:55:54.659366   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 22:55:54.659437   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 22:55:54.698007   54573 cri.go:89] found id: "c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f"
	I0717 22:55:54.698036   54573 cri.go:89] found id: ""
	I0717 22:55:54.698045   54573 logs.go:284] 1 containers: [c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f]
	I0717 22:55:54.698104   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:54.704502   54573 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 22:55:54.704584   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 22:55:54.738722   54573 cri.go:89] found id: "98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea"
	I0717 22:55:54.738752   54573 cri.go:89] found id: ""
	I0717 22:55:54.738761   54573 logs.go:284] 1 containers: [98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea]
	I0717 22:55:54.738816   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:54.743815   54573 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 22:55:54.743888   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 22:55:54.789962   54573 cri.go:89] found id: "acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266"
	I0717 22:55:54.789992   54573 cri.go:89] found id: ""
	I0717 22:55:54.790003   54573 logs.go:284] 1 containers: [acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266]
	I0717 22:55:54.790061   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:54.796502   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 22:55:54.796577   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 22:55:54.840319   54573 cri.go:89] found id: "692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629"
	I0717 22:55:54.840349   54573 cri.go:89] found id: ""
	I0717 22:55:54.840358   54573 logs.go:284] 1 containers: [692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629]
	I0717 22:55:54.840418   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:54.847001   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 22:55:54.847074   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 22:55:54.900545   54573 cri.go:89] found id: "9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567"
	I0717 22:55:54.900571   54573 cri.go:89] found id: ""
	I0717 22:55:54.900578   54573 logs.go:284] 1 containers: [9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567]
	I0717 22:55:54.900639   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:54.905595   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 22:55:54.905703   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 22:55:54.940386   54573 cri.go:89] found id: "f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c"
	I0717 22:55:54.940405   54573 cri.go:89] found id: ""
	I0717 22:55:54.940414   54573 logs.go:284] 1 containers: [f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c]
	I0717 22:55:54.940471   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:54.947365   54573 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 22:55:54.947444   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 22:55:54.993902   54573 cri.go:89] found id: ""
	I0717 22:55:54.993930   54573 logs.go:284] 0 containers: []
	W0717 22:55:54.993942   54573 logs.go:286] No container was found matching "kindnet"
	I0717 22:55:54.993950   54573 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 22:55:54.994019   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 22:55:55.040159   54573 cri.go:89] found id: "a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b"
	I0717 22:55:55.040184   54573 cri.go:89] found id: "4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6"
	I0717 22:55:55.040190   54573 cri.go:89] found id: ""
	I0717 22:55:55.040198   54573 logs.go:284] 2 containers: [a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b 4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6]
	I0717 22:55:55.040265   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:55.045151   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:55.050805   54573 logs.go:123] Gathering logs for kubelet ...
	I0717 22:55:55.050831   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 22:55:55.123810   54573 logs.go:123] Gathering logs for describe nodes ...
	I0717 22:55:55.123845   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 22:55:55.306589   54573 logs.go:123] Gathering logs for kube-scheduler [692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629] ...
	I0717 22:55:55.306623   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629"
	I0717 22:55:55.351035   54573 logs.go:123] Gathering logs for kube-controller-manager [f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c] ...
	I0717 22:55:55.351083   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c"
	I0717 22:55:55.416647   54573 logs.go:123] Gathering logs for storage-provisioner [a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b] ...
	I0717 22:55:55.416705   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b"
	I0717 22:55:55.460413   54573 logs.go:123] Gathering logs for CRI-O ...
	I0717 22:55:55.460452   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 22:55:56.034198   54573 logs.go:123] Gathering logs for container status ...
	I0717 22:55:56.034238   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 22:55:56.073509   54573 logs.go:123] Gathering logs for dmesg ...
	I0717 22:55:56.073552   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 22:55:56.086385   54573 logs.go:123] Gathering logs for kube-apiserver [c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f] ...
	I0717 22:55:56.086413   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f"
	I0717 22:55:56.132057   54573 logs.go:123] Gathering logs for etcd [98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea] ...
	I0717 22:55:56.132087   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea"
	I0717 22:55:56.176634   54573 logs.go:123] Gathering logs for coredns [acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266] ...
	I0717 22:55:56.176663   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266"
	I0717 22:55:56.213415   54573 logs.go:123] Gathering logs for kube-proxy [9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567] ...
	I0717 22:55:56.213451   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567"
	I0717 22:55:56.248868   54573 logs.go:123] Gathering logs for storage-provisioner [4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6] ...
	I0717 22:55:56.248912   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6"
	I0717 22:55:53.969902   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:56.470299   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:54.949399   54248 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 22:55:54.984090   54248 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 22:55:55.014819   54248 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 22:55:55.014950   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:55.015014   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.31.0 minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8 minikube.k8s.io/name=embed-certs-571296 minikube.k8s.io/updated_at=2023_07_17T22_55_55_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:55.558851   54248 ops.go:34] apiserver oom_adj: -16
	I0717 22:55:55.558970   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:56.177713   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:56.677742   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:57.177957   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:57.677787   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:58.793638   54573 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0717 22:55:58.806705   54573 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I0717 22:55:58.808953   54573 api_server.go:141] control plane version: v1.27.3
	I0717 22:55:58.808972   54573 api_server.go:131] duration metric: took 4.149642061s to wait for apiserver health ...
	I0717 22:55:58.808979   54573 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 22:55:58.808999   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 22:55:58.809042   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 22:55:58.840945   54573 cri.go:89] found id: "c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f"
	I0717 22:55:58.840965   54573 cri.go:89] found id: ""
	I0717 22:55:58.840972   54573 logs.go:284] 1 containers: [c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f]
	I0717 22:55:58.841028   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:58.845463   54573 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 22:55:58.845557   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 22:55:58.877104   54573 cri.go:89] found id: "98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea"
	I0717 22:55:58.877134   54573 cri.go:89] found id: ""
	I0717 22:55:58.877143   54573 logs.go:284] 1 containers: [98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea]
	I0717 22:55:58.877199   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:58.881988   54573 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 22:55:58.882060   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 22:55:58.920491   54573 cri.go:89] found id: "acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266"
	I0717 22:55:58.920520   54573 cri.go:89] found id: ""
	I0717 22:55:58.920530   54573 logs.go:284] 1 containers: [acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266]
	I0717 22:55:58.920588   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:58.925170   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 22:55:58.925239   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 22:55:58.970908   54573 cri.go:89] found id: "692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629"
	I0717 22:55:58.970928   54573 cri.go:89] found id: ""
	I0717 22:55:58.970937   54573 logs.go:284] 1 containers: [692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629]
	I0717 22:55:58.970988   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:58.976950   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 22:55:58.977005   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 22:55:59.007418   54573 cri.go:89] found id: "9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567"
	I0717 22:55:59.007438   54573 cri.go:89] found id: ""
	I0717 22:55:59.007445   54573 logs.go:284] 1 containers: [9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567]
	I0717 22:55:59.007550   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:59.012222   54573 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 22:55:59.012279   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 22:55:59.048939   54573 cri.go:89] found id: "f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c"
	I0717 22:55:59.048960   54573 cri.go:89] found id: ""
	I0717 22:55:59.048968   54573 logs.go:284] 1 containers: [f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c]
	I0717 22:55:59.049023   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:59.053335   54573 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 22:55:59.053400   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 22:55:59.084168   54573 cri.go:89] found id: ""
	I0717 22:55:59.084198   54573 logs.go:284] 0 containers: []
	W0717 22:55:59.084208   54573 logs.go:286] No container was found matching "kindnet"
	I0717 22:55:59.084221   54573 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 22:55:59.084270   54573 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 22:55:59.117213   54573 cri.go:89] found id: "a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b"
	I0717 22:55:59.117237   54573 cri.go:89] found id: "4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6"
	I0717 22:55:59.117244   54573 cri.go:89] found id: ""
	I0717 22:55:59.117252   54573 logs.go:284] 2 containers: [a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b 4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6]
	I0717 22:55:59.117311   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:59.122816   54573 ssh_runner.go:195] Run: which crictl
	I0717 22:55:59.127074   54573 logs.go:123] Gathering logs for dmesg ...
	I0717 22:55:59.127095   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 22:55:59.142525   54573 logs.go:123] Gathering logs for kube-apiserver [c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f] ...
	I0717 22:55:59.142557   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c809651d0696dea43c5b2c7e1006750d3faab38b568abf9a48734a82f6dfbc2f"
	I0717 22:55:59.190652   54573 logs.go:123] Gathering logs for kube-scheduler [692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629] ...
	I0717 22:55:59.190690   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 692978c127c580ed7c9afb9c271c150ce9e81e662f117a4c0549e1fbee323629"
	I0717 22:55:59.231512   54573 logs.go:123] Gathering logs for kube-controller-manager [f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c] ...
	I0717 22:55:59.231547   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0b0c765bf6d1d25cfe6d92f752b75efe3c1d4091a312661ae33caef1bbb2c8c"
	I0717 22:55:59.280732   54573 logs.go:123] Gathering logs for storage-provisioner [4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6] ...
	I0717 22:55:59.280767   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d1cbdc04001fbfbcf96d9fece3d243cc4c148beba1392abf4b7ff0e0fbf00b6"
	I0717 22:55:59.318213   54573 logs.go:123] Gathering logs for CRI-O ...
	I0717 22:55:59.318237   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 22:55:59.872973   54573 logs.go:123] Gathering logs for container status ...
	I0717 22:55:59.873017   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 22:55:59.911891   54573 logs.go:123] Gathering logs for kubelet ...
	I0717 22:55:59.911918   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 22:55:59.976450   54573 logs.go:123] Gathering logs for describe nodes ...
	I0717 22:55:59.976483   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 22:56:00.099556   54573 logs.go:123] Gathering logs for etcd [98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea] ...
	I0717 22:56:00.099592   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98d6ff57de0a6696ce0d10a0f2a254db2a44ba93fe126d17f2af819d855766ea"
	I0717 22:56:00.145447   54573 logs.go:123] Gathering logs for coredns [acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266] ...
	I0717 22:56:00.145479   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 acfd42b72df4e809b47f076bf031c400ba241eebbd54a24e2be6a9470077c266"
	I0717 22:56:00.181246   54573 logs.go:123] Gathering logs for kube-proxy [9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567] ...
	I0717 22:56:00.181277   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d9c7f49bf24052fa78783bd11fb8fd3312c740b5d9992620a77749e67fc9567"
	I0717 22:56:00.221127   54573 logs.go:123] Gathering logs for storage-provisioner [a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b] ...
	I0717 22:56:00.221150   54573 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a67aa752ac1c939d754088e4bea0014dbea2de3f97c778ef1f1d5522eee8c57b"
	I0717 22:56:02.761729   54573 system_pods.go:59] 8 kube-system pods found
	I0717 22:56:02.761758   54573 system_pods.go:61] "coredns-5d78c9869d-2mpst" [7516b57f-a4cb-4e2f-995e-8e063bed22ae] Running
	I0717 22:56:02.761765   54573 system_pods.go:61] "etcd-no-preload-935524" [b663c4f9-d98e-457d-b511-435bea5e9525] Running
	I0717 22:56:02.761772   54573 system_pods.go:61] "kube-apiserver-no-preload-935524" [fb6f55fc-8705-46aa-a23c-e7870e52e542] Running
	I0717 22:56:02.761778   54573 system_pods.go:61] "kube-controller-manager-no-preload-935524" [37d43d22-e857-4a9b-b0cb-a9fc39931baa] Running
	I0717 22:56:02.761783   54573 system_pods.go:61] "kube-proxy-qhp66" [8bc95955-b7ba-41e3-ac67-604a9695f784] Running
	I0717 22:56:02.761790   54573 system_pods.go:61] "kube-scheduler-no-preload-935524" [86fef4e0-b156-421a-8d53-0d34cae2cdb3] Running
	I0717 22:56:02.761800   54573 system_pods.go:61] "metrics-server-74d5c6b9c-tlbpl" [7c478efe-4435-45dd-a688-745872fc2918] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:56:02.761809   54573 system_pods.go:61] "storage-provisioner" [85812d54-7a57-430b-991e-e301f123a86a] Running
	I0717 22:56:02.761823   54573 system_pods.go:74] duration metric: took 3.952838173s to wait for pod list to return data ...
	I0717 22:56:02.761837   54573 default_sa.go:34] waiting for default service account to be created ...
	I0717 22:56:02.764526   54573 default_sa.go:45] found service account: "default"
	I0717 22:56:02.764547   54573 default_sa.go:55] duration metric: took 2.700233ms for default service account to be created ...
	I0717 22:56:02.764556   54573 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 22:56:02.770288   54573 system_pods.go:86] 8 kube-system pods found
	I0717 22:56:02.770312   54573 system_pods.go:89] "coredns-5d78c9869d-2mpst" [7516b57f-a4cb-4e2f-995e-8e063bed22ae] Running
	I0717 22:56:02.770318   54573 system_pods.go:89] "etcd-no-preload-935524" [b663c4f9-d98e-457d-b511-435bea5e9525] Running
	I0717 22:56:02.770323   54573 system_pods.go:89] "kube-apiserver-no-preload-935524" [fb6f55fc-8705-46aa-a23c-e7870e52e542] Running
	I0717 22:56:02.770327   54573 system_pods.go:89] "kube-controller-manager-no-preload-935524" [37d43d22-e857-4a9b-b0cb-a9fc39931baa] Running
	I0717 22:56:02.770330   54573 system_pods.go:89] "kube-proxy-qhp66" [8bc95955-b7ba-41e3-ac67-604a9695f784] Running
	I0717 22:56:02.770334   54573 system_pods.go:89] "kube-scheduler-no-preload-935524" [86fef4e0-b156-421a-8d53-0d34cae2cdb3] Running
	I0717 22:56:02.770340   54573 system_pods.go:89] "metrics-server-74d5c6b9c-tlbpl" [7c478efe-4435-45dd-a688-745872fc2918] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:56:02.770346   54573 system_pods.go:89] "storage-provisioner" [85812d54-7a57-430b-991e-e301f123a86a] Running
	I0717 22:56:02.770354   54573 system_pods.go:126] duration metric: took 5.793179ms to wait for k8s-apps to be running ...
	I0717 22:56:02.770362   54573 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 22:56:02.770410   54573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:56:02.786132   54573 system_svc.go:56] duration metric: took 15.760975ms WaitForService to wait for kubelet.
	I0717 22:56:02.786161   54573 kubeadm.go:581] duration metric: took 4m24.129949995s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 22:56:02.786182   54573 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:56:02.789957   54573 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:56:02.789978   54573 node_conditions.go:123] node cpu capacity is 2
	I0717 22:56:02.789988   54573 node_conditions.go:105] duration metric: took 3.802348ms to run NodePressure ...
	I0717 22:56:02.789999   54573 start.go:228] waiting for startup goroutines ...
	I0717 22:56:02.790008   54573 start.go:233] waiting for cluster config update ...
	I0717 22:56:02.790021   54573 start.go:242] writing updated cluster config ...
	I0717 22:56:02.790308   54573 ssh_runner.go:195] Run: rm -f paused
	I0717 22:56:02.840154   54573 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 22:56:02.843243   54573 out.go:177] * Done! kubectl is now configured to use "no-preload-935524" cluster and "default" namespace by default
	I0717 22:55:58.471229   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:00.969263   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:55:58.177892   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:58.677211   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:59.177916   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:55:59.678088   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:00.177933   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:00.678096   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:01.177184   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:01.677152   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:02.177561   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:02.677947   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:02.970089   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:05.470783   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:03.177870   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:03.677715   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:04.177238   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:04.677261   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:05.177220   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:05.678164   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:06.177948   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:06.677392   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:07.177167   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:07.678131   54248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:07.945881   54248 kubeadm.go:1081] duration metric: took 12.930982407s to wait for elevateKubeSystemPrivileges.
	I0717 22:56:07.945928   54248 kubeadm.go:406] StartCluster complete in 5m28.89261834s
	I0717 22:56:07.945958   54248 settings.go:142] acquiring lock: {Name:mk9be0d05c943f5010cb1ca9690e7c6a87f18950 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:56:07.946058   54248 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:56:07.948004   54248 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/kubeconfig: {Name:mk1295c692902c2f497638c9fc1fd126fdb1a2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:56:07.948298   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 22:56:07.948538   54248 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 22:56:07.948628   54248 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-571296"
	I0717 22:56:07.948639   54248 config.go:182] Loaded profile config "embed-certs-571296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:56:07.948657   54248 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-571296"
	W0717 22:56:07.948669   54248 addons.go:240] addon storage-provisioner should already be in state true
	I0717 22:56:07.948687   54248 addons.go:69] Setting default-storageclass=true in profile "embed-certs-571296"
	I0717 22:56:07.948708   54248 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-571296"
	I0717 22:56:07.948713   54248 host.go:66] Checking if "embed-certs-571296" exists ...
	I0717 22:56:07.949078   54248 addons.go:69] Setting metrics-server=true in profile "embed-certs-571296"
	I0717 22:56:07.949100   54248 addons.go:231] Setting addon metrics-server=true in "embed-certs-571296"
	I0717 22:56:07.949101   54248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	W0717 22:56:07.949107   54248 addons.go:240] addon metrics-server should already be in state true
	I0717 22:56:07.949126   54248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:07.949148   54248 host.go:66] Checking if "embed-certs-571296" exists ...
	I0717 22:56:07.949361   54248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:07.949390   54248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:07.949481   54248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:07.949508   54248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:07.967136   54248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35403
	I0717 22:56:07.967705   54248 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:07.967874   54248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43925
	I0717 22:56:07.968286   54248 main.go:141] libmachine: Using API Version  1
	I0717 22:56:07.968317   54248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:07.968395   54248 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:07.968741   54248 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:07.969000   54248 main.go:141] libmachine: Using API Version  1
	I0717 22:56:07.969019   54248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:07.969056   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetState
	I0717 22:56:07.969416   54248 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:07.969964   54248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:07.969993   54248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:07.970220   54248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37531
	I0717 22:56:07.970682   54248 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:07.971172   54248 main.go:141] libmachine: Using API Version  1
	I0717 22:56:07.971194   54248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:07.971603   54248 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:07.972617   54248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:07.972655   54248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:07.988352   54248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38131
	I0717 22:56:07.988872   54248 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:07.989481   54248 main.go:141] libmachine: Using API Version  1
	I0717 22:56:07.989507   54248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:07.989913   54248 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:07.990198   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetState
	I0717 22:56:07.992174   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:56:07.992359   54248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34283
	I0717 22:56:07.993818   54248 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 22:56:07.995350   54248 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 22:56:07.995373   54248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 22:56:07.995393   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:56:07.992931   54248 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:07.995909   54248 main.go:141] libmachine: Using API Version  1
	I0717 22:56:07.995933   54248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:07.996276   54248 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:07.996424   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetState
	I0717 22:56:07.998630   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:56:08.000660   54248 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:56:07.999385   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:56:07.999983   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:56:08.002498   54248 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:56:08.002510   54248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 22:56:08.002529   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:56:08.002556   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:56:08.002587   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:56:08.002626   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:56:08.002714   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:56:08.002874   54248 sshutil.go:53] new ssh client: &{IP:192.168.61.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa Username:docker}
	I0717 22:56:08.003290   54248 addons.go:231] Setting addon default-storageclass=true in "embed-certs-571296"
	W0717 22:56:08.003311   54248 addons.go:240] addon default-storageclass should already be in state true
	I0717 22:56:08.003340   54248 host.go:66] Checking if "embed-certs-571296" exists ...
	I0717 22:56:08.003736   54248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:08.003763   54248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:08.005771   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:56:08.006163   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:56:08.006194   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:56:08.006393   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:56:08.006560   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:56:08.006744   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:56:08.006890   54248 sshutil.go:53] new ssh client: &{IP:192.168.61.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa Username:docker}
	I0717 22:56:08.025042   54248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42975
	I0717 22:56:08.025743   54248 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:08.026232   54248 main.go:141] libmachine: Using API Version  1
	I0717 22:56:08.026252   54248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:08.026732   54248 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:08.027295   54248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:08.027340   54248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:08.044326   54248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40863
	I0717 22:56:08.044743   54248 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:08.045285   54248 main.go:141] libmachine: Using API Version  1
	I0717 22:56:08.045309   54248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:08.045686   54248 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:08.045900   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetState
	I0717 22:56:08.047695   54248 main.go:141] libmachine: (embed-certs-571296) Calling .DriverName
	I0717 22:56:08.047962   54248 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 22:56:08.047980   54248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 22:56:08.048000   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHHostname
	I0717 22:56:08.050685   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:56:08.051084   54248 main.go:141] libmachine: (embed-certs-571296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:4c:e5", ip: ""} in network mk-embed-certs-571296: {Iface:virbr1 ExpiryTime:2023-07-17 23:50:22 +0000 UTC Type:0 Mac:52:54:00:e0:4c:e5 Iaid: IPaddr:192.168.61.179 Prefix:24 Hostname:embed-certs-571296 Clientid:01:52:54:00:e0:4c:e5}
	I0717 22:56:08.051115   54248 main.go:141] libmachine: (embed-certs-571296) DBG | domain embed-certs-571296 has defined IP address 192.168.61.179 and MAC address 52:54:00:e0:4c:e5 in network mk-embed-certs-571296
	I0717 22:56:08.051376   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHPort
	I0717 22:56:08.051561   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHKeyPath
	I0717 22:56:08.051762   54248 main.go:141] libmachine: (embed-certs-571296) Calling .GetSSHUsername
	I0717 22:56:08.051880   54248 sshutil.go:53] new ssh client: &{IP:192.168.61.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/embed-certs-571296/id_rsa Username:docker}
	I0717 22:56:08.221022   54248 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 22:56:08.221057   54248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 22:56:08.262777   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 22:56:08.286077   54248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:56:08.301703   54248 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 22:56:08.301728   54248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 22:56:08.314524   54248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 22:56:08.370967   54248 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 22:56:08.370989   54248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 22:56:08.585011   54248 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-571296" context rescaled to 1 replicas
	I0717 22:56:08.585061   54248 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.179 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 22:56:08.587143   54248 out.go:177] * Verifying Kubernetes components...
	I0717 22:56:08.588842   54248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:56:08.666555   54248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 22:56:10.506154   54248 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.243338067s)
	I0717 22:56:10.506244   54248 start.go:901] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0717 22:56:11.016648   54248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.730514867s)
	I0717 22:56:11.016699   54248 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.427824424s)
	I0717 22:56:11.016659   54248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.702100754s)
	I0717 22:56:11.016728   54248 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:11.016733   54248 node_ready.go:35] waiting up to 6m0s for node "embed-certs-571296" to be "Ready" ...
	I0717 22:56:11.016742   54248 main.go:141] libmachine: (embed-certs-571296) Calling .Close
	I0717 22:56:11.016707   54248 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:11.016862   54248 main.go:141] libmachine: (embed-certs-571296) Calling .Close
	I0717 22:56:11.017139   54248 main.go:141] libmachine: (embed-certs-571296) DBG | Closing plugin on server side
	I0717 22:56:11.017150   54248 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:11.017165   54248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:11.017168   54248 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:11.017175   54248 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:11.017177   54248 main.go:141] libmachine: (embed-certs-571296) DBG | Closing plugin on server side
	I0717 22:56:11.017183   54248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:11.017186   54248 main.go:141] libmachine: (embed-certs-571296) Calling .Close
	I0717 22:56:11.017196   54248 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:11.017242   54248 main.go:141] libmachine: (embed-certs-571296) Calling .Close
	I0717 22:56:11.017409   54248 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:11.017425   54248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:11.017443   54248 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:11.017452   54248 main.go:141] libmachine: (embed-certs-571296) Calling .Close
	I0717 22:56:11.017571   54248 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:11.017600   54248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:11.018689   54248 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:11.018706   54248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:11.018703   54248 main.go:141] libmachine: (embed-certs-571296) DBG | Closing plugin on server side
	I0717 22:56:11.043490   54248 node_ready.go:49] node "embed-certs-571296" has status "Ready":"True"
	I0717 22:56:11.043511   54248 node_ready.go:38] duration metric: took 26.766819ms waiting for node "embed-certs-571296" to be "Ready" ...
	I0717 22:56:11.043518   54248 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:56:11.057095   54248 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-6ljtn" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:11.116641   54248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.450034996s)
	I0717 22:56:11.116706   54248 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:11.116724   54248 main.go:141] libmachine: (embed-certs-571296) Calling .Close
	I0717 22:56:11.117015   54248 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:11.117034   54248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:11.117046   54248 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:11.117058   54248 main.go:141] libmachine: (embed-certs-571296) Calling .Close
	I0717 22:56:11.117341   54248 main.go:141] libmachine: (embed-certs-571296) DBG | Closing plugin on server side
	I0717 22:56:11.117389   54248 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:11.117408   54248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:11.117427   54248 addons.go:467] Verifying addon metrics-server=true in "embed-certs-571296"
	I0717 22:56:11.119741   54248 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 22:56:07.979850   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:10.471118   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:12.472257   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:11.122047   54248 addons.go:502] enable addons completed in 3.173503334s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 22:56:12.605075   54248 pod_ready.go:92] pod "coredns-5d78c9869d-6ljtn" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:12.605111   54248 pod_ready.go:81] duration metric: took 1.547984916s waiting for pod "coredns-5d78c9869d-6ljtn" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:12.605126   54248 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-tq27r" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:12.619682   54248 pod_ready.go:92] pod "coredns-5d78c9869d-tq27r" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:12.619710   54248 pod_ready.go:81] duration metric: took 14.576786ms waiting for pod "coredns-5d78c9869d-tq27r" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:12.619722   54248 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:12.628850   54248 pod_ready.go:92] pod "etcd-embed-certs-571296" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:12.628878   54248 pod_ready.go:81] duration metric: took 9.147093ms waiting for pod "etcd-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:12.628889   54248 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:12.641360   54248 pod_ready.go:92] pod "kube-apiserver-embed-certs-571296" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:12.641381   54248 pod_ready.go:81] duration metric: took 12.485183ms waiting for pod "kube-apiserver-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:12.641391   54248 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:12.656634   54248 pod_ready.go:92] pod "kube-controller-manager-embed-certs-571296" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:12.656663   54248 pod_ready.go:81] duration metric: took 15.264878ms waiting for pod "kube-controller-manager-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:12.656677   54248 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xjpds" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:14.480168   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:16.969340   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:13.530098   54248 pod_ready.go:92] pod "kube-proxy-xjpds" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:13.530129   54248 pod_ready.go:81] duration metric: took 873.444575ms waiting for pod "kube-proxy-xjpds" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:13.530144   54248 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:13.821592   54248 pod_ready.go:92] pod "kube-scheduler-embed-certs-571296" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:13.821615   54248 pod_ready.go:81] duration metric: took 291.46393ms waiting for pod "kube-scheduler-embed-certs-571296" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:13.821625   54248 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:16.228210   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:19.470498   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:21.969531   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:18.228289   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:20.228420   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:22.228472   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:26.250616   54649 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.023698231s)
	I0717 22:56:26.250690   54649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:56:26.264095   54649 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 22:56:26.274295   54649 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 22:56:26.284265   54649 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:56:26.284332   54649 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 22:56:26.341601   54649 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0717 22:56:26.341719   54649 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 22:56:26.507992   54649 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 22:56:26.508194   54649 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 22:56:26.508344   54649 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 22:56:26.684682   54649 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 22:56:26.686603   54649 out.go:204]   - Generating certificates and keys ...
	I0717 22:56:26.686753   54649 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 22:56:26.686833   54649 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 22:56:26.686963   54649 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 22:56:26.687386   54649 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0717 22:56:26.687802   54649 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 22:56:26.688484   54649 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0717 22:56:26.689007   54649 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0717 22:56:26.689618   54649 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0717 22:56:26.690234   54649 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 22:56:26.690845   54649 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 22:56:26.691391   54649 kubeadm.go:322] [certs] Using the existing "sa" key
	I0717 22:56:26.691484   54649 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 22:56:26.793074   54649 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 22:56:26.956354   54649 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 22:56:27.033560   54649 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 22:56:27.222598   54649 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 22:56:27.242695   54649 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 22:56:27.243923   54649 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 22:56:27.244009   54649 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 22:56:27.382359   54649 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 22:56:27.385299   54649 out.go:204]   - Booting up control plane ...
	I0717 22:56:27.385459   54649 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 22:56:27.385595   54649 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 22:56:27.385699   54649 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 22:56:27.386230   54649 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 22:56:27.388402   54649 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 22:56:24.469634   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:26.470480   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:24.231654   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:26.728390   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:28.471360   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:30.493443   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:28.728821   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:30.729474   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:32.731419   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:35.894189   54649 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.505577 seconds
	I0717 22:56:35.894298   54649 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 22:56:35.922569   54649 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 22:56:36.459377   54649 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 22:56:36.459628   54649 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-504828 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 22:56:36.981248   54649 kubeadm.go:322] [bootstrap-token] Using token: aq0fl5.e7xnmbjqmeipfdlw
	I0717 22:56:36.983221   54649 out.go:204]   - Configuring RBAC rules ...
	I0717 22:56:36.983401   54649 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 22:56:37.001576   54649 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 22:56:37.012679   54649 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 22:56:37.018002   54649 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 22:56:37.025356   54649 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 22:56:37.030822   54649 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 22:56:37.049741   54649 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 22:56:37.309822   54649 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 22:56:37.414906   54649 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 22:56:37.414947   54649 kubeadm.go:322] 
	I0717 22:56:37.415023   54649 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 22:56:37.415035   54649 kubeadm.go:322] 
	I0717 22:56:37.415135   54649 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 22:56:37.415145   54649 kubeadm.go:322] 
	I0717 22:56:37.415190   54649 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 22:56:37.415290   54649 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 22:56:37.415373   54649 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 22:56:37.415383   54649 kubeadm.go:322] 
	I0717 22:56:37.415495   54649 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0717 22:56:37.415529   54649 kubeadm.go:322] 
	I0717 22:56:37.415593   54649 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 22:56:37.415602   54649 kubeadm.go:322] 
	I0717 22:56:37.415677   54649 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 22:56:37.415755   54649 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 22:56:37.415892   54649 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 22:56:37.415904   54649 kubeadm.go:322] 
	I0717 22:56:37.416034   54649 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 22:56:37.416151   54649 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 22:56:37.416172   54649 kubeadm.go:322] 
	I0717 22:56:37.416306   54649 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token aq0fl5.e7xnmbjqmeipfdlw \
	I0717 22:56:37.416451   54649 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb \
	I0717 22:56:37.416478   54649 kubeadm.go:322] 	--control-plane 
	I0717 22:56:37.416487   54649 kubeadm.go:322] 
	I0717 22:56:37.416596   54649 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 22:56:37.416606   54649 kubeadm.go:322] 
	I0717 22:56:37.416708   54649 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token aq0fl5.e7xnmbjqmeipfdlw \
	I0717 22:56:37.416850   54649 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb 
	I0717 22:56:37.417385   54649 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 22:56:37.417413   54649 cni.go:84] Creating CNI manager for ""
	I0717 22:56:37.417426   54649 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:56:37.419367   54649 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 22:56:37.421047   54649 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 22:56:37.456430   54649 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 22:56:37.520764   54649 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 22:56:37.520861   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:37.520877   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.31.0 minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8 minikube.k8s.io/name=default-k8s-diff-port-504828 minikube.k8s.io/updated_at=2023_07_17T22_56_37_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:32.970043   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:35.469085   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:35.257714   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:37.730437   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:37.914888   54649 ops.go:34] apiserver oom_adj: -16
	I0717 22:56:37.914920   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:38.508471   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:39.008147   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:39.508371   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:40.008059   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:40.508319   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:41.008945   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:41.507958   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:42.008509   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:42.508920   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:37.969711   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:39.970230   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:42.468790   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:40.227771   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:42.228268   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:43.008542   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:43.508809   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:44.008922   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:44.508771   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:45.008681   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:45.507925   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:46.008078   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:46.508950   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:47.008902   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:47.508705   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:44.470199   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:46.969467   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:44.728843   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:46.729321   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:48.008736   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:48.508008   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:49.008524   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:49.508783   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:50.008620   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:50.508131   54649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:56:50.675484   54649 kubeadm.go:1081] duration metric: took 13.154682677s to wait for elevateKubeSystemPrivileges.
	I0717 22:56:50.675522   54649 kubeadm.go:406] StartCluster complete in 5m29.688096626s
	I0717 22:56:50.675542   54649 settings.go:142] acquiring lock: {Name:mk9be0d05c943f5010cb1ca9690e7c6a87f18950 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:56:50.675625   54649 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:56:50.678070   54649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/kubeconfig: {Name:mk1295c692902c2f497638c9fc1fd126fdb1a2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:56:50.678358   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 22:56:50.678397   54649 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 22:56:50.678485   54649 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-504828"
	I0717 22:56:50.678504   54649 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-504828"
	I0717 22:56:50.678504   54649 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-504828"
	W0717 22:56:50.678515   54649 addons.go:240] addon storage-provisioner should already be in state true
	I0717 22:56:50.678526   54649 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-504828"
	I0717 22:56:50.678537   54649 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-504828"
	I0717 22:56:50.678557   54649 host.go:66] Checking if "default-k8s-diff-port-504828" exists ...
	I0717 22:56:50.678561   54649 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-504828"
	W0717 22:56:50.678571   54649 addons.go:240] addon metrics-server should already be in state true
	I0717 22:56:50.678630   54649 host.go:66] Checking if "default-k8s-diff-port-504828" exists ...
	I0717 22:56:50.678570   54649 config.go:182] Loaded profile config "default-k8s-diff-port-504828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:56:50.678961   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:50.678995   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:50.679011   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:50.679039   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:50.678962   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:50.679094   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:50.696229   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43571
	I0717 22:56:50.696669   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:50.697375   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:56:50.697414   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:50.697831   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:50.698436   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:50.698474   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:50.698998   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46873
	I0717 22:56:50.699168   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41135
	I0717 22:56:50.699382   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:50.699530   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:50.699812   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:56:50.699824   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:50.700021   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:56:50.700044   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:50.700219   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:50.700385   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:50.700570   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetState
	I0717 22:56:50.700748   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:50.700785   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:50.715085   54649 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-504828"
	W0717 22:56:50.715119   54649 addons.go:240] addon default-storageclass should already be in state true
	I0717 22:56:50.715149   54649 host.go:66] Checking if "default-k8s-diff-port-504828" exists ...
	I0717 22:56:50.715547   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:50.715580   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:50.715831   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41743
	I0717 22:56:50.716347   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:50.716905   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:56:50.716921   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:50.717285   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:50.717334   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41035
	I0717 22:56:50.717493   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetState
	I0717 22:56:50.717699   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:50.718238   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:56:50.718257   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:50.718580   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:50.718843   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetState
	I0717 22:56:50.719486   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:56:50.721699   54649 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 22:56:50.723464   54649 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 22:56:50.723484   54649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 22:56:50.720832   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:56:50.723509   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:56:50.725600   54649 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:56:50.728061   54649 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:56:50.726758   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:56:50.727455   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:56:50.728105   54649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 22:56:50.728133   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:56:50.728134   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:56:50.728166   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:56:50.728380   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:56:50.728785   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:56:50.728938   54649 sshutil.go:53] new ssh client: &{IP:192.168.72.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa Username:docker}
	I0717 22:56:50.731891   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:56:50.732348   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:56:50.732379   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:56:50.732589   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:56:50.732793   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:56:50.732974   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:56:50.733113   54649 sshutil.go:53] new ssh client: &{IP:192.168.72.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa Username:docker}
	I0717 22:56:50.741098   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34891
	I0717 22:56:50.741744   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:50.742386   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:56:50.742410   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:50.742968   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:50.743444   54649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:56:50.743490   54649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:56:50.759985   54649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38185
	I0717 22:56:50.760547   54649 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:56:50.761145   54649 main.go:141] libmachine: Using API Version  1
	I0717 22:56:50.761171   54649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:56:50.761598   54649 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:56:50.761779   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetState
	I0717 22:56:50.763276   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .DriverName
	I0717 22:56:50.763545   54649 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 22:56:50.763559   54649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 22:56:50.763574   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHHostname
	I0717 22:56:50.766525   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:56:50.766964   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:6f:f7", ip: ""} in network mk-default-k8s-diff-port-504828: {Iface:virbr4 ExpiryTime:2023-07-17 23:51:06 +0000 UTC Type:0 Mac:52:54:00:28:6f:f7 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:default-k8s-diff-port-504828 Clientid:01:52:54:00:28:6f:f7}
	I0717 22:56:50.766995   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | domain default-k8s-diff-port-504828 has defined IP address 192.168.72.118 and MAC address 52:54:00:28:6f:f7 in network mk-default-k8s-diff-port-504828
	I0717 22:56:50.767254   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHPort
	I0717 22:56:50.767444   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHKeyPath
	I0717 22:56:50.767636   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .GetSSHUsername
	I0717 22:56:50.767803   54649 sshutil.go:53] new ssh client: &{IP:192.168.72.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/default-k8s-diff-port-504828/id_rsa Username:docker}
	I0717 22:56:50.963671   54649 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 22:56:50.963698   54649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 22:56:50.982828   54649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 22:56:50.985884   54649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:56:50.989077   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 22:56:51.020140   54649 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 22:56:51.020174   54649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 22:56:51.094548   54649 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 22:56:51.094574   54649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 22:56:51.185896   54649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 22:56:51.238666   54649 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-504828" context rescaled to 1 replicas
	I0717 22:56:51.238704   54649 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.118 Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 22:56:51.241792   54649 out.go:177] * Verifying Kubernetes components...
	I0717 22:56:51.243720   54649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:56:49.470925   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:51.970366   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:48.732421   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:50.742608   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:52.980991   54649 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.998121603s)
	I0717 22:56:52.981060   54649 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:52.981078   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .Close
	I0717 22:56:52.981422   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | Closing plugin on server side
	I0717 22:56:52.981424   54649 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:52.981460   54649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:52.981472   54649 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:52.981486   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .Close
	I0717 22:56:52.981815   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | Closing plugin on server side
	I0717 22:56:52.981906   54649 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:52.981923   54649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:52.981962   54649 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:52.981979   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .Close
	I0717 22:56:52.982328   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | Closing plugin on server side
	I0717 22:56:52.982335   54649 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:52.982352   54649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:53.384207   54649 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.398283926s)
	I0717 22:56:53.384259   54649 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:53.384263   54649 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.39515958s)
	I0717 22:56:53.384272   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .Close
	I0717 22:56:53.384280   54649 start.go:901] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0717 22:56:53.384588   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | Closing plugin on server side
	I0717 22:56:53.384664   54649 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:53.384680   54649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:53.384694   54649 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:53.384711   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .Close
	I0717 22:56:53.385419   54649 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:53.385438   54649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:53.385446   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | Closing plugin on server side
	I0717 22:56:53.810615   54649 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.624668019s)
	I0717 22:56:53.810613   54649 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.5668435s)
	I0717 22:56:53.810690   54649 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:53.810712   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .Close
	I0717 22:56:53.810717   54649 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-504828" to be "Ready" ...
	I0717 22:56:53.811092   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) DBG | Closing plugin on server side
	I0717 22:56:53.811172   54649 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:53.811191   54649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:53.811209   54649 main.go:141] libmachine: Making call to close driver server
	I0717 22:56:53.811223   54649 main.go:141] libmachine: (default-k8s-diff-port-504828) Calling .Close
	I0717 22:56:53.811501   54649 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:56:53.811519   54649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:56:53.811529   54649 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-504828"
	I0717 22:56:53.813588   54649 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0717 22:56:53.815209   54649 addons.go:502] enable addons completed in 3.136812371s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0717 22:56:53.848049   54649 node_ready.go:49] node "default-k8s-diff-port-504828" has status "Ready":"True"
	I0717 22:56:53.848070   54649 node_ready.go:38] duration metric: took 37.336626ms waiting for node "default-k8s-diff-port-504828" to be "Ready" ...
	I0717 22:56:53.848078   54649 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:56:53.869392   54649 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-rqcjj" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.922409   54649 pod_ready.go:92] pod "coredns-5d78c9869d-rqcjj" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:55.922433   54649 pod_ready.go:81] duration metric: took 2.05301467s waiting for pod "coredns-5d78c9869d-rqcjj" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.922442   54649 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-xz4mj" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.930140   54649 pod_ready.go:92] pod "coredns-5d78c9869d-xz4mj" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:55.930162   54649 pod_ready.go:81] duration metric: took 7.714745ms waiting for pod "coredns-5d78c9869d-xz4mj" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.930171   54649 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.938968   54649 pod_ready.go:92] pod "etcd-default-k8s-diff-port-504828" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:55.938994   54649 pod_ready.go:81] duration metric: took 8.813777ms waiting for pod "etcd-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.939006   54649 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.950100   54649 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-504828" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:55.950127   54649 pod_ready.go:81] duration metric: took 11.110719ms waiting for pod "kube-apiserver-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.950141   54649 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.956205   54649 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-504828" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:55.956228   54649 pod_ready.go:81] duration metric: took 6.078268ms waiting for pod "kube-controller-manager-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:55.956240   54649 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nmtc8" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:56.318975   54649 pod_ready.go:92] pod "kube-proxy-nmtc8" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:56.319002   54649 pod_ready.go:81] duration metric: took 362.754902ms waiting for pod "kube-proxy-nmtc8" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:56.319012   54649 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:56.725010   54649 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-504828" in "kube-system" namespace has status "Ready":"True"
	I0717 22:56:56.725042   54649 pod_ready.go:81] duration metric: took 406.022192ms waiting for pod "kube-scheduler-default-k8s-diff-port-504828" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:56.725059   54649 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace to be "Ready" ...
	I0717 22:56:53.971176   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:56.468730   53870 pod_ready.go:102] pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:57.063020   53870 pod_ready.go:81] duration metric: took 4m0.001070587s waiting for pod "metrics-server-74d5856cc6-cmknj" in "kube-system" namespace to be "Ready" ...
	E0717 22:56:57.063061   53870 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 22:56:57.063088   53870 pod_ready.go:38] duration metric: took 4m1.198793286s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:56:57.063114   53870 kubeadm.go:640] restartCluster took 5m14.33125167s
	W0717 22:56:57.063164   53870 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0717 22:56:57.063188   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 22:56:53.230170   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:55.230713   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:57.729746   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:59.128445   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:01.628013   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:56:59.730555   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:02.228533   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:03.628469   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:06.127096   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:04.228878   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:06.229004   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:08.128257   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:10.128530   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:12.128706   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:10.086799   53870 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.023585108s)
	I0717 22:57:10.086877   53870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:57:10.102476   53870 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 22:57:10.112904   53870 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 22:57:10.123424   53870 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:57:10.123471   53870 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0717 22:57:10.352747   53870 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 22:57:08.232655   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:10.730595   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:14.129308   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:16.627288   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:13.230023   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:15.730720   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:18.628332   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:20.629305   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:18.227910   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:20.228411   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:22.230069   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:23.708206   53870 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0717 22:57:23.708283   53870 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 22:57:23.708382   53870 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 22:57:23.708529   53870 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 22:57:23.708651   53870 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 22:57:23.708789   53870 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 22:57:23.708916   53870 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 22:57:23.708988   53870 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0717 22:57:23.709078   53870 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 22:57:23.710652   53870 out.go:204]   - Generating certificates and keys ...
	I0717 22:57:23.710759   53870 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 22:57:23.710840   53870 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 22:57:23.710959   53870 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 22:57:23.711058   53870 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0717 22:57:23.711156   53870 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 22:57:23.711234   53870 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0717 22:57:23.711314   53870 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0717 22:57:23.711415   53870 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0717 22:57:23.711522   53870 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 22:57:23.711635   53870 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 22:57:23.711697   53870 kubeadm.go:322] [certs] Using the existing "sa" key
	I0717 22:57:23.711776   53870 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 22:57:23.711831   53870 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 22:57:23.711892   53870 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 22:57:23.711978   53870 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 22:57:23.712048   53870 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 22:57:23.712136   53870 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 22:57:23.713799   53870 out.go:204]   - Booting up control plane ...
	I0717 22:57:23.713909   53870 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 22:57:23.714033   53870 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 22:57:23.714145   53870 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 22:57:23.714268   53870 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 22:57:23.714418   53870 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 22:57:23.714483   53870 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.004162 seconds
	I0717 22:57:23.714656   53870 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 22:57:23.714846   53870 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 22:57:23.714929   53870 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 22:57:23.715088   53870 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-332820 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0717 22:57:23.715170   53870 kubeadm.go:322] [bootstrap-token] Using token: sjemvm.5nuhmbx5uh7jm9fo
	I0717 22:57:23.716846   53870 out.go:204]   - Configuring RBAC rules ...
	I0717 22:57:23.716937   53870 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 22:57:23.717067   53870 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 22:57:23.717210   53870 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 22:57:23.717333   53870 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 22:57:23.717414   53870 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 22:57:23.717456   53870 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 22:57:23.717494   53870 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 22:57:23.717501   53870 kubeadm.go:322] 
	I0717 22:57:23.717564   53870 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 22:57:23.717571   53870 kubeadm.go:322] 
	I0717 22:57:23.717636   53870 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 22:57:23.717641   53870 kubeadm.go:322] 
	I0717 22:57:23.717662   53870 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 22:57:23.717733   53870 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 22:57:23.717783   53870 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 22:57:23.717791   53870 kubeadm.go:322] 
	I0717 22:57:23.717839   53870 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 22:57:23.717946   53870 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 22:57:23.718040   53870 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 22:57:23.718052   53870 kubeadm.go:322] 
	I0717 22:57:23.718172   53870 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0717 22:57:23.718289   53870 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 22:57:23.718299   53870 kubeadm.go:322] 
	I0717 22:57:23.718373   53870 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token sjemvm.5nuhmbx5uh7jm9fo \
	I0717 22:57:23.718476   53870 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb \
	I0717 22:57:23.718525   53870 kubeadm.go:322]     --control-plane 	  
	I0717 22:57:23.718539   53870 kubeadm.go:322] 
	I0717 22:57:23.718624   53870 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 22:57:23.718631   53870 kubeadm.go:322] 
	I0717 22:57:23.718703   53870 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token sjemvm.5nuhmbx5uh7jm9fo \
	I0717 22:57:23.718812   53870 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:2b20cc9eba3bf0e434eb130babeb3ad86c31985ed5c62e5292b16caea113a4eb 
	I0717 22:57:23.718825   53870 cni.go:84] Creating CNI manager for ""
	I0717 22:57:23.718834   53870 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 22:57:23.720891   53870 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 22:57:23.128941   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:25.129405   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:27.129595   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:23.722935   53870 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 22:57:23.738547   53870 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0717 22:57:23.764002   53870 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 22:57:23.764109   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.0 minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8 minikube.k8s.io/name=old-k8s-version-332820 minikube.k8s.io/updated_at=2023_07_17T22_57_23_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:23.764127   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:23.835900   53870 ops.go:34] apiserver oom_adj: -16
	I0717 22:57:24.015975   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:24.622866   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:25.122754   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:25.622733   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:26.123442   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:26.623190   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:27.123191   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:27.622408   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:24.729678   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:26.730278   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:29.629588   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:32.130357   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:28.122555   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:28.622771   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:29.122717   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:29.622760   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:30.123186   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:30.622731   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:31.122724   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:31.622957   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:32.122775   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:32.622552   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:29.228462   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:31.232382   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:34.629160   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:37.128209   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:33.122703   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:33.623262   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:34.122574   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:34.623130   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:35.122819   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:35.622426   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:36.123262   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:36.622474   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:37.122820   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:37.623414   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:33.244514   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:35.735391   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:38.123076   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:38.622497   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:39.122826   53870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:57:39.220042   53870 kubeadm.go:1081] duration metric: took 15.45599881s to wait for elevateKubeSystemPrivileges.
	I0717 22:57:39.220076   53870 kubeadm.go:406] StartCluster complete in 5m56.5464295s
	I0717 22:57:39.220095   53870 settings.go:142] acquiring lock: {Name:mk9be0d05c943f5010cb1ca9690e7c6a87f18950 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:57:39.220173   53870 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:57:39.221940   53870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/kubeconfig: {Name:mk1295c692902c2f497638c9fc1fd126fdb1a2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:57:39.222201   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 22:57:39.222371   53870 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 22:57:39.222458   53870 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-332820"
	I0717 22:57:39.222474   53870 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-332820"
	W0717 22:57:39.222486   53870 addons.go:240] addon storage-provisioner should already be in state true
	I0717 22:57:39.222517   53870 config.go:182] Loaded profile config "old-k8s-version-332820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0717 22:57:39.222533   53870 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-332820"
	I0717 22:57:39.222544   53870 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-332820"
	I0717 22:57:39.222528   53870 host.go:66] Checking if "old-k8s-version-332820" exists ...
	I0717 22:57:39.222906   53870 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-332820"
	I0717 22:57:39.222947   53870 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-332820"
	I0717 22:57:39.222955   53870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:57:39.222965   53870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:57:39.222978   53870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:57:39.222989   53870 main.go:141] libmachine: Launching plugin server for driver kvm2
	W0717 22:57:39.222958   53870 addons.go:240] addon metrics-server should already be in state true
	I0717 22:57:39.223266   53870 host.go:66] Checking if "old-k8s-version-332820" exists ...
	I0717 22:57:39.223611   53870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:57:39.223644   53870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:57:39.241834   53870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36257
	I0717 22:57:39.242161   53870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36597
	I0717 22:57:39.242290   53870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41325
	I0717 22:57:39.242409   53870 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:57:39.242525   53870 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:57:39.242699   53870 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:57:39.242983   53870 main.go:141] libmachine: Using API Version  1
	I0717 22:57:39.242995   53870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:57:39.243079   53870 main.go:141] libmachine: Using API Version  1
	I0717 22:57:39.243085   53870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:57:39.243146   53870 main.go:141] libmachine: Using API Version  1
	I0717 22:57:39.243152   53870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:57:39.243455   53870 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:57:39.243499   53870 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:57:39.243923   53870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:57:39.243955   53870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:57:39.244114   53870 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:57:39.244145   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetState
	I0717 22:57:39.244609   53870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:57:39.244636   53870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:57:39.264113   53870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38423
	I0717 22:57:39.264664   53870 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:57:39.265196   53870 main.go:141] libmachine: Using API Version  1
	I0717 22:57:39.265217   53870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:57:39.265738   53870 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:57:39.265990   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetState
	I0717 22:57:39.267754   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:57:39.269600   53870 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:57:39.269649   53870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37175
	I0717 22:57:39.271155   53870 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:57:39.271170   53870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 22:57:39.271196   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:57:39.271008   53870 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-332820"
	W0717 22:57:39.271246   53870 addons.go:240] addon default-storageclass should already be in state true
	I0717 22:57:39.271278   53870 host.go:66] Checking if "old-k8s-version-332820" exists ...
	I0717 22:57:39.271539   53870 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:57:39.271564   53870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:57:39.271582   53870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:57:39.272088   53870 main.go:141] libmachine: Using API Version  1
	I0717 22:57:39.272112   53870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:57:39.272450   53870 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:57:39.272628   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetState
	I0717 22:57:39.275001   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:57:39.276178   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:57:39.276580   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:57:39.276603   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:57:39.276866   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:57:39.277046   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:57:39.277173   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:57:39.277284   53870 sshutil.go:53] new ssh client: &{IP:192.168.50.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/old-k8s-version-332820/id_rsa Username:docker}
	I0717 22:57:39.279594   53870 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 22:57:39.281161   53870 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 22:57:39.281178   53870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 22:57:39.281197   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:57:39.284664   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:57:39.285093   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:57:39.285126   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:57:39.285323   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:57:39.285486   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:57:39.285624   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:57:39.285731   53870 sshutil.go:53] new ssh client: &{IP:192.168.50.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/old-k8s-version-332820/id_rsa Username:docker}
	I0717 22:57:39.291470   53870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44127
	I0717 22:57:39.291955   53870 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:57:39.292486   53870 main.go:141] libmachine: Using API Version  1
	I0717 22:57:39.292509   53870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:57:39.292896   53870 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:57:39.293409   53870 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:57:39.293446   53870 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:57:39.310134   53870 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35331
	I0717 22:57:39.310626   53870 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:57:39.311202   53870 main.go:141] libmachine: Using API Version  1
	I0717 22:57:39.311227   53870 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:57:39.311758   53870 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:57:39.311947   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetState
	I0717 22:57:39.314218   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .DriverName
	I0717 22:57:39.314495   53870 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 22:57:39.314506   53870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 22:57:39.314520   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHHostname
	I0717 22:57:39.317813   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:57:39.321612   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHPort
	I0717 22:57:39.321659   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:ca:1a", ip: ""} in network mk-old-k8s-version-332820: {Iface:virbr2 ExpiryTime:2023-07-17 23:51:25 +0000 UTC Type:0 Mac:52:54:00:46:ca:1a Iaid: IPaddr:192.168.50.149 Prefix:24 Hostname:old-k8s-version-332820 Clientid:01:52:54:00:46:ca:1a}
	I0717 22:57:39.321685   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | domain old-k8s-version-332820 has defined IP address 192.168.50.149 and MAC address 52:54:00:46:ca:1a in network mk-old-k8s-version-332820
	I0717 22:57:39.321771   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHKeyPath
	I0717 22:57:39.321872   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .GetSSHUsername
	I0717 22:57:39.321963   53870 sshutil.go:53] new ssh client: &{IP:192.168.50.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/old-k8s-version-332820/id_rsa Username:docker}
	I0717 22:57:39.410805   53870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 22:57:39.448115   53870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:57:39.468015   53870 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 22:57:39.468044   53870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 22:57:39.510209   53870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 22:57:39.542977   53870 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 22:57:39.543006   53870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 22:57:39.621799   53870 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 22:57:39.621830   53870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 22:57:39.695813   53870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 22:57:39.820255   53870 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-332820" context rescaled to 1 replicas
	I0717 22:57:39.820293   53870 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.149 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 22:57:39.822441   53870 out.go:177] * Verifying Kubernetes components...
	I0717 22:57:39.824136   53870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:57:40.366843   53870 start.go:901] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0717 22:57:40.692359   53870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.244194312s)
	I0717 22:57:40.692412   53870 main.go:141] libmachine: Making call to close driver server
	I0717 22:57:40.692417   53870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.18217225s)
	I0717 22:57:40.692451   53870 main.go:141] libmachine: Making call to close driver server
	I0717 22:57:40.692463   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .Close
	I0717 22:57:40.692427   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .Close
	I0717 22:57:40.692926   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | Closing plugin on server side
	I0717 22:57:40.692941   53870 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:57:40.692955   53870 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:57:40.692961   53870 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:57:40.692966   53870 main.go:141] libmachine: Making call to close driver server
	I0717 22:57:40.692971   53870 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:57:40.692977   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .Close
	I0717 22:57:40.692982   53870 main.go:141] libmachine: Making call to close driver server
	I0717 22:57:40.692993   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .Close
	I0717 22:57:40.693346   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | Closing plugin on server side
	I0717 22:57:40.693347   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | Closing plugin on server side
	I0717 22:57:40.693360   53870 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:57:40.693377   53870 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:57:40.693379   53870 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:57:40.693390   53870 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:57:40.693391   53870 main.go:141] libmachine: Making call to close driver server
	I0717 22:57:40.693402   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .Close
	I0717 22:57:40.693727   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | Closing plugin on server side
	I0717 22:57:40.695361   53870 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:57:40.695382   53870 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:57:41.360399   53870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.664534201s)
	I0717 22:57:41.360444   53870 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.536280934s)
	I0717 22:57:41.360477   53870 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-332820" to be "Ready" ...
	I0717 22:57:41.360484   53870 main.go:141] libmachine: Making call to close driver server
	I0717 22:57:41.360603   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .Close
	I0717 22:57:41.360912   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | Closing plugin on server side
	I0717 22:57:41.360959   53870 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:57:41.360976   53870 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:57:41.360986   53870 main.go:141] libmachine: Making call to close driver server
	I0717 22:57:41.361000   53870 main.go:141] libmachine: (old-k8s-version-332820) Calling .Close
	I0717 22:57:41.361267   53870 main.go:141] libmachine: (old-k8s-version-332820) DBG | Closing plugin on server side
	I0717 22:57:41.361323   53870 main.go:141] libmachine: Successfully made call to close driver server
	I0717 22:57:41.361335   53870 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 22:57:41.361350   53870 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-332820"
	I0717 22:57:41.364209   53870 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 22:57:39.128970   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:41.129335   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:41.365698   53870 addons.go:502] enable addons completed in 2.143322329s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 22:57:41.370307   53870 node_ready.go:49] node "old-k8s-version-332820" has status "Ready":"True"
	I0717 22:57:41.370334   53870 node_ready.go:38] duration metric: took 9.838563ms waiting for node "old-k8s-version-332820" to be "Ready" ...
	I0717 22:57:41.370345   53870 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:57:41.477919   53870 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-pjn9n" in "kube-system" namespace to be "Ready" ...
	I0717 22:57:38.229186   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:40.229347   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:42.730552   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:43.627986   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:46.126930   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:43.515865   53870 pod_ready.go:102] pod "coredns-5644d7b6d9-pjn9n" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:44.011451   53870 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-pjn9n" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-pjn9n" not found
	I0717 22:57:44.011475   53870 pod_ready.go:81] duration metric: took 2.533523466s waiting for pod "coredns-5644d7b6d9-pjn9n" in "kube-system" namespace to be "Ready" ...
	E0717 22:57:44.011483   53870 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-pjn9n" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-pjn9n" not found
	I0717 22:57:44.011490   53870 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-t4d2t" in "kube-system" namespace to be "Ready" ...
	I0717 22:57:46.023775   53870 pod_ready.go:102] pod "coredns-5644d7b6d9-t4d2t" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:45.229105   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:47.727715   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:48.128141   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:50.628216   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:48.523241   53870 pod_ready.go:102] pod "coredns-5644d7b6d9-t4d2t" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:50.024098   53870 pod_ready.go:92] pod "coredns-5644d7b6d9-t4d2t" in "kube-system" namespace has status "Ready":"True"
	I0717 22:57:50.024118   53870 pod_ready.go:81] duration metric: took 6.012622912s waiting for pod "coredns-5644d7b6d9-t4d2t" in "kube-system" namespace to be "Ready" ...
	I0717 22:57:50.024129   53870 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dpnlw" in "kube-system" namespace to be "Ready" ...
	I0717 22:57:50.029960   53870 pod_ready.go:92] pod "kube-proxy-dpnlw" in "kube-system" namespace has status "Ready":"True"
	I0717 22:57:50.029976   53870 pod_ready.go:81] duration metric: took 5.842404ms waiting for pod "kube-proxy-dpnlw" in "kube-system" namespace to be "Ready" ...
	I0717 22:57:50.029985   53870 pod_ready.go:38] duration metric: took 8.659630061s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:57:50.029998   53870 api_server.go:52] waiting for apiserver process to appear ...
	I0717 22:57:50.030036   53870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:57:50.046609   53870 api_server.go:72] duration metric: took 10.226287152s to wait for apiserver process to appear ...
	I0717 22:57:50.046634   53870 api_server.go:88] waiting for apiserver healthz status ...
	I0717 22:57:50.046654   53870 api_server.go:253] Checking apiserver healthz at https://192.168.50.149:8443/healthz ...
	I0717 22:57:50.053143   53870 api_server.go:279] https://192.168.50.149:8443/healthz returned 200:
	ok
	I0717 22:57:50.054242   53870 api_server.go:141] control plane version: v1.16.0
	I0717 22:57:50.054259   53870 api_server.go:131] duration metric: took 7.618888ms to wait for apiserver health ...
	I0717 22:57:50.054265   53870 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 22:57:50.059517   53870 system_pods.go:59] 4 kube-system pods found
	I0717 22:57:50.059537   53870 system_pods.go:61] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:50.059542   53870 system_pods.go:61] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:50.059550   53870 system_pods.go:61] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:50.059559   53870 system_pods.go:61] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:50.059567   53870 system_pods.go:74] duration metric: took 5.296559ms to wait for pod list to return data ...
	I0717 22:57:50.059575   53870 default_sa.go:34] waiting for default service account to be created ...
	I0717 22:57:50.062619   53870 default_sa.go:45] found service account: "default"
	I0717 22:57:50.062636   53870 default_sa.go:55] duration metric: took 3.055001ms for default service account to be created ...
	I0717 22:57:50.062643   53870 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 22:57:50.066927   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:50.066960   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:50.066969   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:50.066978   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:50.066987   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:50.067003   53870 retry.go:31] will retry after 260.087226ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:50.331854   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:50.331881   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:50.331886   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:50.331893   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:50.331899   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:50.331914   53870 retry.go:31] will retry after 352.733578ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:50.689437   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:50.689470   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:50.689478   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:50.689489   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:50.689497   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:50.689531   53870 retry.go:31] will retry after 448.974584ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:51.144027   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:51.144052   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:51.144057   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:51.144064   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:51.144072   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:51.144084   53870 retry.go:31] will retry after 388.759143ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:51.538649   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:51.538681   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:51.538690   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:51.538701   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:51.538709   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:51.538726   53870 retry.go:31] will retry after 516.772578ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:52.061223   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:52.061251   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:52.061257   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:52.061264   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:52.061270   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:52.061284   53870 retry.go:31] will retry after 640.645684ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:52.706812   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:52.706841   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:52.706848   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:52.706857   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:52.706865   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:52.706881   53870 retry.go:31] will retry after 800.353439ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:49.728135   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:51.729859   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:53.128948   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:55.628153   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:53.512660   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:53.512702   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:53.512710   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:53.512720   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:53.512729   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:53.512746   53870 retry.go:31] will retry after 1.135974065s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:54.653983   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:54.654008   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:54.654013   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:54.654021   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:54.654027   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:54.654040   53870 retry.go:31] will retry after 1.807970353s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:56.466658   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:56.466685   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:56.466690   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:56.466697   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:56.466703   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:56.466717   53870 retry.go:31] will retry after 1.738235237s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:53.729966   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:56.229195   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:58.130852   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:00.627290   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:57:58.210259   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:57:58.210286   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:57:58.210291   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:57:58.210298   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:57:58.210304   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:57:58.210318   53870 retry.go:31] will retry after 2.588058955s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:58:00.805164   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:58:00.805189   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:58:00.805195   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:58:00.805204   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:58:00.805212   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:58:00.805229   53870 retry.go:31] will retry after 2.395095199s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:57:58.230452   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:00.730302   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:02.627408   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:05.127023   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:03.205614   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:58:03.205641   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:58:03.205646   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:58:03.205654   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:58:03.205661   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:58:03.205673   53870 retry.go:31] will retry after 3.552797061s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:58:06.765112   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:58:06.765169   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:58:06.765189   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:58:06.765202   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:58:06.765211   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:58:06.765229   53870 retry.go:31] will retry after 3.62510644s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:58:03.229254   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:05.729500   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:07.627727   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:10.127545   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:10.396156   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:58:10.396185   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:58:10.396193   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:58:10.396202   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:58:10.396210   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:58:10.396234   53870 retry.go:31] will retry after 7.05504218s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:58:08.230115   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:10.729252   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:12.729814   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:12.627688   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:14.629102   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:17.126975   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:17.458031   53870 system_pods.go:86] 4 kube-system pods found
	I0717 22:58:17.458055   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:58:17.458060   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:58:17.458067   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:58:17.458072   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:58:17.458085   53870 retry.go:31] will retry after 7.079137896s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0717 22:58:15.228577   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:17.229657   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:19.127827   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:21.627879   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:19.733111   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:22.229170   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:24.128551   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:26.627380   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:24.542750   53870 system_pods.go:86] 5 kube-system pods found
	I0717 22:58:24.542779   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:58:24.542785   53870 system_pods.go:89] "kube-controller-manager-old-k8s-version-332820" [6178a2db-2800-4689-bb29-5fd220cf3560] Running
	I0717 22:58:24.542789   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:58:24.542796   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:58:24.542801   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:58:24.542814   53870 retry.go:31] will retry after 10.245831604s: missing components: etcd, kube-apiserver, kube-scheduler
	I0717 22:58:24.729548   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:27.228785   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:28.627425   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:30.627791   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:29.728922   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:31.729450   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:32.628481   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:35.127509   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:37.128620   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:34.794623   53870 system_pods.go:86] 6 kube-system pods found
	I0717 22:58:34.794652   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:58:34.794658   53870 system_pods.go:89] "kube-apiserver-old-k8s-version-332820" [a92ec810-2496-4702-96cb-b99972aa0907] Running
	I0717 22:58:34.794662   53870 system_pods.go:89] "kube-controller-manager-old-k8s-version-332820" [6178a2db-2800-4689-bb29-5fd220cf3560] Running
	I0717 22:58:34.794666   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:58:34.794673   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:58:34.794678   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:58:34.794692   53870 retry.go:31] will retry after 13.54688256s: missing components: etcd, kube-scheduler
	I0717 22:58:33.732071   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:36.230099   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:39.627130   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:41.628484   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:38.230167   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:40.728553   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:42.730438   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:44.129730   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:46.130222   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:45.228042   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:47.230684   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:48.627207   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:51.127809   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:48.348380   53870 system_pods.go:86] 8 kube-system pods found
	I0717 22:58:48.348409   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:58:48.348415   53870 system_pods.go:89] "etcd-old-k8s-version-332820" [2182326c-a489-44f6-a2bb-4d238d500cd4] Pending
	I0717 22:58:48.348419   53870 system_pods.go:89] "kube-apiserver-old-k8s-version-332820" [a92ec810-2496-4702-96cb-b99972aa0907] Running
	I0717 22:58:48.348424   53870 system_pods.go:89] "kube-controller-manager-old-k8s-version-332820" [6178a2db-2800-4689-bb29-5fd220cf3560] Running
	I0717 22:58:48.348429   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:58:48.348433   53870 system_pods.go:89] "kube-scheduler-old-k8s-version-332820" [6145ebf3-1505-4eee-be83-b473b2d6eb16] Running
	I0717 22:58:48.348440   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:58:48.348448   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:58:48.348460   53870 retry.go:31] will retry after 11.748298579s: missing components: etcd
	I0717 22:58:49.730893   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:51.731624   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:53.131814   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:55.628315   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:54.229398   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:56.232954   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:00.104576   53870 system_pods.go:86] 8 kube-system pods found
	I0717 22:59:00.104603   53870 system_pods.go:89] "coredns-5644d7b6d9-t4d2t" [5e8166c1-8b07-4eca-9d2a-51d2142e7c08] Running
	I0717 22:59:00.104609   53870 system_pods.go:89] "etcd-old-k8s-version-332820" [2182326c-a489-44f6-a2bb-4d238d500cd4] Running
	I0717 22:59:00.104613   53870 system_pods.go:89] "kube-apiserver-old-k8s-version-332820" [a92ec810-2496-4702-96cb-b99972aa0907] Running
	I0717 22:59:00.104618   53870 system_pods.go:89] "kube-controller-manager-old-k8s-version-332820" [6178a2db-2800-4689-bb29-5fd220cf3560] Running
	I0717 22:59:00.104622   53870 system_pods.go:89] "kube-proxy-dpnlw" [eb78806d-3e64-4d07-a9d5-6bebaa1abe2d] Running
	I0717 22:59:00.104626   53870 system_pods.go:89] "kube-scheduler-old-k8s-version-332820" [6145ebf3-1505-4eee-be83-b473b2d6eb16] Running
	I0717 22:59:00.104632   53870 system_pods.go:89] "metrics-server-74d5856cc6-59wx5" [3ddd38f4-fe18-4e49-bff3-f8f73a688b98] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 22:59:00.104638   53870 system_pods.go:89] "storage-provisioner" [7158a1e3-713a-4702-b1d8-3553d7dfa0de] Running
	I0717 22:59:00.104646   53870 system_pods.go:126] duration metric: took 1m10.041998574s to wait for k8s-apps to be running ...
	I0717 22:59:00.104654   53870 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 22:59:00.104712   53870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:59:00.127311   53870 system_svc.go:56] duration metric: took 22.647393ms WaitForService to wait for kubelet.
	I0717 22:59:00.127340   53870 kubeadm.go:581] duration metric: took 1m20.307024254s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 22:59:00.127365   53870 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:59:00.131417   53870 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 22:59:00.131440   53870 node_conditions.go:123] node cpu capacity is 2
	I0717 22:59:00.131451   53870 node_conditions.go:105] duration metric: took 4.081643ms to run NodePressure ...
	I0717 22:59:00.131462   53870 start.go:228] waiting for startup goroutines ...
	I0717 22:59:00.131468   53870 start.go:233] waiting for cluster config update ...
	I0717 22:59:00.131478   53870 start.go:242] writing updated cluster config ...
	I0717 22:59:00.131776   53870 ssh_runner.go:195] Run: rm -f paused
	I0717 22:59:00.183048   53870 start.go:578] kubectl: 1.27.3, cluster: 1.16.0 (minor skew: 11)
	I0717 22:59:00.184945   53870 out.go:177] 
	W0717 22:59:00.186221   53870 out.go:239] ! /usr/local/bin/kubectl is version 1.27.3, which may have incompatibilities with Kubernetes 1.16.0.
	I0717 22:59:00.187477   53870 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0717 22:59:00.188679   53870 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-332820" cluster and "default" namespace by default
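The interleaved pod_ready.go:102 lines in this trace record minikube repeatedly polling each metrics-server pod's Ready condition until a four-minute deadline expires. Below is a minimal client-go sketch of that style of readiness poll, offered only as an illustration (it is not minikube's actual pod_ready.go; the namespace and pod name are copied from the log and would need to exist in a live cluster):

// readinesspoll.go - illustrative sketch only; not minikube's pod_ready.go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Namespace and pod name copied from the log above; adjust for a live cluster.
	const ns, pod = "kube-system", "metrics-server-74d5c6b9c-cknmm"

	// Poll the Ready condition every two seconds for up to four minutes,
	// mirroring the cadence and deadline visible in the pod_ready.go lines.
	err = wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
		p, getErr := client.CoreV1().Pods(ns).Get(context.TODO(), pod, metav1.GetOptions{})
		if getErr != nil {
			return false, nil // treat transient lookup errors as "not ready yet"
		}
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				fmt.Printf("pod %q has status \"Ready\":%q\n", pod, c.Status)
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	if err != nil {
		fmt.Println("gave up waiting for Ready:", err)
	}
}

Run against the cluster's kubeconfig, this prints the pod's Ready status on each attempt and returns a timeout error if the condition never becomes True, analogous to the "context deadline exceeded" outcome logged when these waits give up.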
	I0717 22:58:57.628894   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:59.629684   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:02.128694   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:58:58.730891   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:00.731091   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:04.627812   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:06.628434   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:03.230847   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:05.728807   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:07.728897   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:08.630065   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:11.128988   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:09.729866   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:12.229160   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:13.627995   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:16.128000   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:14.728745   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:16.733743   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:18.131709   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:20.628704   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:19.234979   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:21.730483   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:22.629821   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:25.127417   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:27.127827   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:24.229123   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:26.728729   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:29.629594   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:32.126711   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:28.729318   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:30.729924   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:32.731713   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:34.627629   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:37.128939   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:35.228008   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:37.233675   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:39.628990   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:41.629614   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:39.729052   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:41.730060   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:44.127514   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:46.128048   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:44.228115   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:46.229857   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:48.128761   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:50.631119   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:48.728917   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:50.730222   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:52.731295   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:53.127276   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:55.127950   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:57.128481   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:55.228655   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:57.228813   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:59.626761   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:01.628045   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 22:59:59.229493   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:01.230143   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:04.127371   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:06.128098   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:03.728770   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:06.228708   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:08.128197   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:10.626883   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:08.229060   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:10.727573   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:12.730410   54248 pod_ready.go:102] pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:12.628273   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:14.629361   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:17.127148   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:13.822400   54248 pod_ready.go:81] duration metric: took 4m0.000761499s waiting for pod "metrics-server-74d5c6b9c-cknmm" in "kube-system" namespace to be "Ready" ...
	E0717 23:00:13.822430   54248 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 23:00:13.822438   54248 pod_ready.go:38] duration metric: took 4m2.778910042s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 23:00:13.822455   54248 api_server.go:52] waiting for apiserver process to appear ...
	I0717 23:00:13.822482   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 23:00:13.822546   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 23:00:13.868846   54248 cri.go:89] found id: "50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c"
	I0717 23:00:13.868873   54248 cri.go:89] found id: ""
	I0717 23:00:13.868884   54248 logs.go:284] 1 containers: [50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c]
	I0717 23:00:13.868951   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:13.873997   54248 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 23:00:13.874077   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 23:00:13.904386   54248 cri.go:89] found id: "e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2"
	I0717 23:00:13.904415   54248 cri.go:89] found id: ""
	I0717 23:00:13.904425   54248 logs.go:284] 1 containers: [e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2]
	I0717 23:00:13.904486   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:13.909075   54248 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 23:00:13.909127   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 23:00:13.940628   54248 cri.go:89] found id: "828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594"
	I0717 23:00:13.940657   54248 cri.go:89] found id: ""
	I0717 23:00:13.940667   54248 logs.go:284] 1 containers: [828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594]
	I0717 23:00:13.940721   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:13.945076   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 23:00:13.945132   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 23:00:13.976589   54248 cri.go:89] found id: "a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e"
	I0717 23:00:13.976612   54248 cri.go:89] found id: ""
	I0717 23:00:13.976621   54248 logs.go:284] 1 containers: [a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e]
	I0717 23:00:13.976684   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:13.981163   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 23:00:13.981231   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 23:00:14.018277   54248 cri.go:89] found id: "5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea"
	I0717 23:00:14.018298   54248 cri.go:89] found id: ""
	I0717 23:00:14.018308   54248 logs.go:284] 1 containers: [5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea]
	I0717 23:00:14.018370   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:14.022494   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 23:00:14.022557   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 23:00:14.055302   54248 cri.go:89] found id: "0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135"
	I0717 23:00:14.055327   54248 cri.go:89] found id: ""
	I0717 23:00:14.055336   54248 logs.go:284] 1 containers: [0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135]
	I0717 23:00:14.055388   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:14.059980   54248 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 23:00:14.060041   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 23:00:14.092467   54248 cri.go:89] found id: ""
	I0717 23:00:14.092495   54248 logs.go:284] 0 containers: []
	W0717 23:00:14.092505   54248 logs.go:286] No container was found matching "kindnet"
	I0717 23:00:14.092512   54248 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 23:00:14.092570   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 23:00:14.127348   54248 cri.go:89] found id: "9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87"
	I0717 23:00:14.127370   54248 cri.go:89] found id: ""
	I0717 23:00:14.127383   54248 logs.go:284] 1 containers: [9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87]
	I0717 23:00:14.127438   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:14.132646   54248 logs.go:123] Gathering logs for dmesg ...
	I0717 23:00:14.132673   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 23:00:14.147882   54248 logs.go:123] Gathering logs for kube-apiserver [50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c] ...
	I0717 23:00:14.147911   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c"
	I0717 23:00:14.198417   54248 logs.go:123] Gathering logs for etcd [e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2] ...
	I0717 23:00:14.198466   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2"
	I0717 23:00:14.244734   54248 logs.go:123] Gathering logs for kube-scheduler [a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e] ...
	I0717 23:00:14.244775   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e"
	I0717 23:00:14.287920   54248 logs.go:123] Gathering logs for storage-provisioner [9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87] ...
	I0717 23:00:14.287956   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87"
	I0717 23:00:14.333785   54248 logs.go:123] Gathering logs for container status ...
	I0717 23:00:14.333820   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 23:00:14.378892   54248 logs.go:123] Gathering logs for kubelet ...
	I0717 23:00:14.378930   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 23:00:14.482292   54248 logs.go:123] Gathering logs for coredns [828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594] ...
	I0717 23:00:14.482332   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594"
	I0717 23:00:14.525418   54248 logs.go:123] Gathering logs for kube-proxy [5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea] ...
	I0717 23:00:14.525445   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea"
	I0717 23:00:14.562013   54248 logs.go:123] Gathering logs for kube-controller-manager [0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135] ...
	I0717 23:00:14.562050   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135"
	I0717 23:00:14.609917   54248 logs.go:123] Gathering logs for CRI-O ...
	I0717 23:00:14.609955   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 23:00:15.088465   54248 logs.go:123] Gathering logs for describe nodes ...
	I0717 23:00:15.088502   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
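Each "Gathering logs for ..." pass above follows the same two-step CRI pattern: resolve a component's container ID with "crictl ps -a --quiet --name=<component>", then tail that container's output with "crictl logs --tail 400 <id>". A small sketch of that loop follows, assuming it runs on the minikube node itself (for example via "minikube ssh"); it illustrates the pattern and is not minikube's logs.go:

// gatherlogs.go - illustrative sketch of the crictl pattern recorded above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Component names mirror the ones gathered in the log; run on the minikube node.
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Println(name, ": crictl ps failed:", err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Println("no container found matching", name)
			continue
		}
		// Tail the first listed container's logs, like the "--tail 400" runs above.
		logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", ids[0]).CombinedOutput()
		if err != nil {
			fmt.Println(name, ": crictl logs failed:", err)
			continue
		}
		fmt.Printf("=== %s (%s) ===\n%s\n", name, ids[0], logs)
	}
}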
	I0717 23:00:17.743963   54248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 23:00:17.761437   54248 api_server.go:72] duration metric: took 4m9.176341685s to wait for apiserver process to appear ...
	I0717 23:00:17.761464   54248 api_server.go:88] waiting for apiserver healthz status ...
	I0717 23:00:17.761499   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 23:00:17.761569   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 23:00:17.796097   54248 cri.go:89] found id: "50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c"
	I0717 23:00:17.796126   54248 cri.go:89] found id: ""
	I0717 23:00:17.796136   54248 logs.go:284] 1 containers: [50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c]
	I0717 23:00:17.796194   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:17.800256   54248 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 23:00:17.800318   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 23:00:17.830519   54248 cri.go:89] found id: "e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2"
	I0717 23:00:17.830540   54248 cri.go:89] found id: ""
	I0717 23:00:17.830549   54248 logs.go:284] 1 containers: [e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2]
	I0717 23:00:17.830597   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:17.835086   54248 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 23:00:17.835158   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 23:00:17.869787   54248 cri.go:89] found id: "828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594"
	I0717 23:00:17.869810   54248 cri.go:89] found id: ""
	I0717 23:00:17.869817   54248 logs.go:284] 1 containers: [828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594]
	I0717 23:00:17.869865   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:17.874977   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 23:00:17.875042   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 23:00:17.906026   54248 cri.go:89] found id: "a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e"
	I0717 23:00:17.906060   54248 cri.go:89] found id: ""
	I0717 23:00:17.906070   54248 logs.go:284] 1 containers: [a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e]
	I0717 23:00:17.906130   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:17.912549   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 23:00:17.912619   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 23:00:17.945804   54248 cri.go:89] found id: "5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea"
	I0717 23:00:17.945832   54248 cri.go:89] found id: ""
	I0717 23:00:17.945842   54248 logs.go:284] 1 containers: [5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea]
	I0717 23:00:17.945892   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:17.950115   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 23:00:17.950170   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 23:00:17.980790   54248 cri.go:89] found id: "0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135"
	I0717 23:00:17.980816   54248 cri.go:89] found id: ""
	I0717 23:00:17.980825   54248 logs.go:284] 1 containers: [0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135]
	I0717 23:00:17.980893   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:19.127901   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:21.628419   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:17.985352   54248 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 23:00:17.987262   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 23:00:18.019763   54248 cri.go:89] found id: ""
	I0717 23:00:18.019794   54248 logs.go:284] 0 containers: []
	W0717 23:00:18.019804   54248 logs.go:286] No container was found matching "kindnet"
	I0717 23:00:18.019812   54248 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 23:00:18.019875   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 23:00:18.052106   54248 cri.go:89] found id: "9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87"
	I0717 23:00:18.052135   54248 cri.go:89] found id: ""
	I0717 23:00:18.052144   54248 logs.go:284] 1 containers: [9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87]
	I0717 23:00:18.052192   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:18.057066   54248 logs.go:123] Gathering logs for kube-scheduler [a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e] ...
	I0717 23:00:18.057093   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e"
	I0717 23:00:18.100637   54248 logs.go:123] Gathering logs for kube-proxy [5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea] ...
	I0717 23:00:18.100672   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea"
	I0717 23:00:18.137149   54248 logs.go:123] Gathering logs for kube-controller-manager [0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135] ...
	I0717 23:00:18.137176   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135"
	I0717 23:00:18.191633   54248 logs.go:123] Gathering logs for storage-provisioner [9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87] ...
	I0717 23:00:18.191679   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87"
	I0717 23:00:18.231765   54248 logs.go:123] Gathering logs for dmesg ...
	I0717 23:00:18.231798   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 23:00:18.250030   54248 logs.go:123] Gathering logs for kube-apiserver [50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c] ...
	I0717 23:00:18.250061   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c"
	I0717 23:00:18.312833   54248 logs.go:123] Gathering logs for etcd [e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2] ...
	I0717 23:00:18.312881   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2"
	I0717 23:00:18.357152   54248 logs.go:123] Gathering logs for coredns [828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594] ...
	I0717 23:00:18.357190   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594"
	I0717 23:00:18.388834   54248 logs.go:123] Gathering logs for kubelet ...
	I0717 23:00:18.388871   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 23:00:18.491866   54248 logs.go:123] Gathering logs for describe nodes ...
	I0717 23:00:18.491898   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 23:00:18.638732   54248 logs.go:123] Gathering logs for CRI-O ...
	I0717 23:00:18.638761   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 23:00:19.135753   54248 logs.go:123] Gathering logs for container status ...
	I0717 23:00:19.135788   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 23:00:21.678446   54248 api_server.go:253] Checking apiserver healthz at https://192.168.61.179:8443/healthz ...
	I0717 23:00:21.684484   54248 api_server.go:279] https://192.168.61.179:8443/healthz returned 200:
	ok
	I0717 23:00:21.686359   54248 api_server.go:141] control plane version: v1.27.3
	I0717 23:00:21.686385   54248 api_server.go:131] duration metric: took 3.924913504s to wait for apiserver health ...
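The healthz check recorded just above is a plain HTTPS GET against the apiserver's /healthz endpoint, which returns the body "ok" with status 200 once the control plane is serving. A standalone sketch of the same probe (illustrative only; the address is the one printed in this log, and skipping certificate verification stands in for loading the cluster CA):

// healthzprobe.go - illustrative sketch of the apiserver healthz check above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Address copied from the log; substitute your own apiserver endpoint.
	const url = "https://192.168.61.179:8443/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify keeps the sketch self-contained; do not do this in production.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // the log above shows "returned 200: ok"
}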
	I0717 23:00:21.686395   54248 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 23:00:21.686420   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 23:00:21.686476   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 23:00:21.720978   54248 cri.go:89] found id: "50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c"
	I0717 23:00:21.721002   54248 cri.go:89] found id: ""
	I0717 23:00:21.721012   54248 logs.go:284] 1 containers: [50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c]
	I0717 23:00:21.721070   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:21.726790   54248 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 23:00:21.726860   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 23:00:21.756975   54248 cri.go:89] found id: "e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2"
	I0717 23:00:21.757001   54248 cri.go:89] found id: ""
	I0717 23:00:21.757011   54248 logs.go:284] 1 containers: [e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2]
	I0717 23:00:21.757078   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:21.761611   54248 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 23:00:21.761681   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 23:00:21.795689   54248 cri.go:89] found id: "828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594"
	I0717 23:00:21.795709   54248 cri.go:89] found id: ""
	I0717 23:00:21.795716   54248 logs.go:284] 1 containers: [828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594]
	I0717 23:00:21.795767   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:21.800172   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 23:00:21.800236   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 23:00:21.833931   54248 cri.go:89] found id: "a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e"
	I0717 23:00:21.833957   54248 cri.go:89] found id: ""
	I0717 23:00:21.833968   54248 logs.go:284] 1 containers: [a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e]
	I0717 23:00:21.834026   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:21.839931   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 23:00:21.840003   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 23:00:21.874398   54248 cri.go:89] found id: "5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea"
	I0717 23:00:21.874423   54248 cri.go:89] found id: ""
	I0717 23:00:21.874432   54248 logs.go:284] 1 containers: [5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea]
	I0717 23:00:21.874489   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:21.878922   54248 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 23:00:21.878986   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 23:00:21.913781   54248 cri.go:89] found id: "0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135"
	I0717 23:00:21.913812   54248 cri.go:89] found id: ""
	I0717 23:00:21.913821   54248 logs.go:284] 1 containers: [0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135]
	I0717 23:00:21.913877   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:21.918217   54248 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 23:00:21.918284   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 23:00:21.951832   54248 cri.go:89] found id: ""
	I0717 23:00:21.951859   54248 logs.go:284] 0 containers: []
	W0717 23:00:21.951869   54248 logs.go:286] No container was found matching "kindnet"
	I0717 23:00:21.951876   54248 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 23:00:21.951925   54248 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 23:00:21.987514   54248 cri.go:89] found id: "9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87"
	I0717 23:00:21.987543   54248 cri.go:89] found id: ""
	I0717 23:00:21.987553   54248 logs.go:284] 1 containers: [9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87]
	I0717 23:00:21.987617   54248 ssh_runner.go:195] Run: which crictl
	I0717 23:00:21.992144   54248 logs.go:123] Gathering logs for container status ...
	I0717 23:00:21.992164   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 23:00:22.031685   54248 logs.go:123] Gathering logs for dmesg ...
	I0717 23:00:22.031715   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 23:00:22.046652   54248 logs.go:123] Gathering logs for describe nodes ...
	I0717 23:00:22.046691   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 23:00:22.191164   54248 logs.go:123] Gathering logs for kube-scheduler [a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e] ...
	I0717 23:00:22.191191   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e"
	I0717 23:00:22.233174   54248 logs.go:123] Gathering logs for kube-proxy [5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea] ...
	I0717 23:00:22.233209   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea"
	I0717 23:00:22.279246   54248 logs.go:123] Gathering logs for kube-controller-manager [0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135] ...
	I0717 23:00:22.279273   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135"
	I0717 23:00:22.330534   54248 logs.go:123] Gathering logs for CRI-O ...
	I0717 23:00:22.330565   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 23:00:22.837335   54248 logs.go:123] Gathering logs for kubelet ...
	I0717 23:00:22.837382   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 23:00:22.947015   54248 logs.go:123] Gathering logs for kube-apiserver [50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c] ...
	I0717 23:00:22.947073   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c"
	I0717 23:00:22.991731   54248 logs.go:123] Gathering logs for etcd [e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2] ...
	I0717 23:00:22.991768   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2"
	I0717 23:00:23.036115   54248 logs.go:123] Gathering logs for coredns [828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594] ...
	I0717 23:00:23.036146   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594"
	I0717 23:00:23.071825   54248 logs.go:123] Gathering logs for storage-provisioner [9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87] ...
	I0717 23:00:23.071860   54248 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87"
	I0717 23:00:25.629247   54248 system_pods.go:59] 8 kube-system pods found
	I0717 23:00:25.629277   54248 system_pods.go:61] "coredns-5d78c9869d-6ljtn" [9488690c-8407-42ce-9938-039af0fa2c4d] Running
	I0717 23:00:25.629284   54248 system_pods.go:61] "etcd-embed-certs-571296" [e6e8b5d1-b1e7-4c3d-89d7-f44a2a6aff8b] Running
	I0717 23:00:25.629291   54248 system_pods.go:61] "kube-apiserver-embed-certs-571296" [3b5f5396-d325-445c-b3af-4cc7a506143e] Running
	I0717 23:00:25.629298   54248 system_pods.go:61] "kube-controller-manager-embed-certs-571296" [e113ffeb-97bd-4b0d-a432-b58be43b295b] Running
	I0717 23:00:25.629305   54248 system_pods.go:61] "kube-proxy-xjpds" [7c074cca-2579-4a54-bf55-77bba0fbcd34] Running
	I0717 23:00:25.629311   54248 system_pods.go:61] "kube-scheduler-embed-certs-571296" [1d192365-8c7b-4367-b4b0-fe9f6f5874af] Running
	I0717 23:00:25.629320   54248 system_pods.go:61] "metrics-server-74d5c6b9c-cknmm" [d1fb930f-518d-4ff4-94fe-7743ab55ecc6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 23:00:25.629331   54248 system_pods.go:61] "storage-provisioner" [1138e736-ef8d-4d24-86d5-cac3f58f0dd6] Running
	I0717 23:00:25.629339   54248 system_pods.go:74] duration metric: took 3.942938415s to wait for pod list to return data ...
	I0717 23:00:25.629347   54248 default_sa.go:34] waiting for default service account to be created ...
	I0717 23:00:25.632079   54248 default_sa.go:45] found service account: "default"
	I0717 23:00:25.632105   54248 default_sa.go:55] duration metric: took 2.751332ms for default service account to be created ...
	I0717 23:00:25.632114   54248 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 23:00:25.639267   54248 system_pods.go:86] 8 kube-system pods found
	I0717 23:00:25.639297   54248 system_pods.go:89] "coredns-5d78c9869d-6ljtn" [9488690c-8407-42ce-9938-039af0fa2c4d] Running
	I0717 23:00:25.639305   54248 system_pods.go:89] "etcd-embed-certs-571296" [e6e8b5d1-b1e7-4c3d-89d7-f44a2a6aff8b] Running
	I0717 23:00:25.639312   54248 system_pods.go:89] "kube-apiserver-embed-certs-571296" [3b5f5396-d325-445c-b3af-4cc7a506143e] Running
	I0717 23:00:25.639321   54248 system_pods.go:89] "kube-controller-manager-embed-certs-571296" [e113ffeb-97bd-4b0d-a432-b58be43b295b] Running
	I0717 23:00:25.639328   54248 system_pods.go:89] "kube-proxy-xjpds" [7c074cca-2579-4a54-bf55-77bba0fbcd34] Running
	I0717 23:00:25.639335   54248 system_pods.go:89] "kube-scheduler-embed-certs-571296" [1d192365-8c7b-4367-b4b0-fe9f6f5874af] Running
	I0717 23:00:25.639345   54248 system_pods.go:89] "metrics-server-74d5c6b9c-cknmm" [d1fb930f-518d-4ff4-94fe-7743ab55ecc6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 23:00:25.639353   54248 system_pods.go:89] "storage-provisioner" [1138e736-ef8d-4d24-86d5-cac3f58f0dd6] Running
	I0717 23:00:25.639362   54248 system_pods.go:126] duration metric: took 7.242476ms to wait for k8s-apps to be running ...
	I0717 23:00:25.639374   54248 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 23:00:25.639426   54248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 23:00:25.654026   54248 system_svc.go:56] duration metric: took 14.646361ms WaitForService to wait for kubelet.
	I0717 23:00:25.654049   54248 kubeadm.go:581] duration metric: took 4m17.068957071s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 23:00:25.654069   54248 node_conditions.go:102] verifying NodePressure condition ...
	I0717 23:00:25.658024   54248 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 23:00:25.658049   54248 node_conditions.go:123] node cpu capacity is 2
	I0717 23:00:25.658058   54248 node_conditions.go:105] duration metric: took 3.985859ms to run NodePressure ...
	I0717 23:00:25.658069   54248 start.go:228] waiting for startup goroutines ...
	I0717 23:00:25.658074   54248 start.go:233] waiting for cluster config update ...
	I0717 23:00:25.658083   54248 start.go:242] writing updated cluster config ...
	I0717 23:00:25.658335   54248 ssh_runner.go:195] Run: rm -f paused
	I0717 23:00:25.709576   54248 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 23:00:25.711805   54248 out.go:177] * Done! kubectl is now configured to use "embed-certs-571296" cluster and "default" namespace by default
	I0717 23:00:24.128252   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:26.130357   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:28.627639   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:30.627679   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:33.128946   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:35.627313   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:37.627998   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:40.128503   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:42.629092   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:45.126773   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:47.127774   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:49.128495   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:51.628994   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:54.127925   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:56.128908   54649 pod_ready.go:102] pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace has status "Ready":"False"
	I0717 23:00:56.725699   54649 pod_ready.go:81] duration metric: took 4m0.000620769s waiting for pod "metrics-server-74d5c6b9c-j8f2f" in "kube-system" namespace to be "Ready" ...
	E0717 23:00:56.725751   54649 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 23:00:56.725769   54649 pod_ready.go:38] duration metric: took 4m2.87768055s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 23:00:56.725797   54649 api_server.go:52] waiting for apiserver process to appear ...
	I0717 23:00:56.725839   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 23:00:56.725908   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 23:00:56.788229   54649 cri.go:89] found id: "45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0"
	I0717 23:00:56.788257   54649 cri.go:89] found id: ""
	I0717 23:00:56.788266   54649 logs.go:284] 1 containers: [45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0]
	I0717 23:00:56.788337   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:00:56.793647   54649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 23:00:56.793709   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 23:00:56.828720   54649 cri.go:89] found id: "7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab"
	I0717 23:00:56.828741   54649 cri.go:89] found id: ""
	I0717 23:00:56.828748   54649 logs.go:284] 1 containers: [7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab]
	I0717 23:00:56.828790   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:00:56.833266   54649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 23:00:56.833339   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 23:00:56.865377   54649 cri.go:89] found id: "30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223"
	I0717 23:00:56.865407   54649 cri.go:89] found id: ""
	I0717 23:00:56.865416   54649 logs.go:284] 1 containers: [30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223]
	I0717 23:00:56.865478   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:00:56.870881   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 23:00:56.870944   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 23:00:56.908871   54649 cri.go:89] found id: "4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad"
	I0717 23:00:56.908891   54649 cri.go:89] found id: ""
	I0717 23:00:56.908899   54649 logs.go:284] 1 containers: [4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad]
	I0717 23:00:56.908952   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:00:56.913121   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 23:00:56.913171   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 23:00:56.946752   54649 cri.go:89] found id: "a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524"
	I0717 23:00:56.946797   54649 cri.go:89] found id: ""
	I0717 23:00:56.946806   54649 logs.go:284] 1 containers: [a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524]
	I0717 23:00:56.946864   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:00:56.951141   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 23:00:56.951216   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 23:00:56.986967   54649 cri.go:89] found id: "7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10"
	I0717 23:00:56.986987   54649 cri.go:89] found id: ""
	I0717 23:00:56.986996   54649 logs.go:284] 1 containers: [7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10]
	I0717 23:00:56.987039   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:00:56.993578   54649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 23:00:56.993655   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 23:00:57.030468   54649 cri.go:89] found id: ""
	I0717 23:00:57.030491   54649 logs.go:284] 0 containers: []
	W0717 23:00:57.030498   54649 logs.go:286] No container was found matching "kindnet"
	I0717 23:00:57.030503   54649 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 23:00:57.030548   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 23:00:57.070533   54649 cri.go:89] found id: "4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6"
	I0717 23:00:57.070564   54649 cri.go:89] found id: ""
	I0717 23:00:57.070574   54649 logs.go:284] 1 containers: [4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6]
	I0717 23:00:57.070632   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:00:57.075379   54649 logs.go:123] Gathering logs for container status ...
	I0717 23:00:57.075685   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 23:00:57.121312   54649 logs.go:123] Gathering logs for kubelet ...
	I0717 23:00:57.121343   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 23:00:57.222647   54649 logs.go:138] Found kubelet problem: Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: W0717 22:56:50.841820    3828 reflector.go:533] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	W0717 23:00:57.222960   54649 logs.go:138] Found kubelet problem: Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: E0717 22:56:50.841863    3828 reflector.go:148] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	I0717 23:00:57.251443   54649 logs.go:123] Gathering logs for dmesg ...
	I0717 23:00:57.251481   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 23:00:57.266213   54649 logs.go:123] Gathering logs for coredns [30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223] ...
	I0717 23:00:57.266242   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223"
	I0717 23:00:57.304032   54649 logs.go:123] Gathering logs for kube-proxy [a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524] ...
	I0717 23:00:57.304058   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524"
	I0717 23:00:57.342839   54649 logs.go:123] Gathering logs for storage-provisioner [4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6] ...
	I0717 23:00:57.342865   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6"
	I0717 23:00:57.378086   54649 logs.go:123] Gathering logs for CRI-O ...
	I0717 23:00:57.378118   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 23:00:57.893299   54649 logs.go:123] Gathering logs for describe nodes ...
	I0717 23:00:57.893338   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 23:00:58.043526   54649 logs.go:123] Gathering logs for kube-apiserver [45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0] ...
	I0717 23:00:58.043564   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0"
	I0717 23:00:58.096422   54649 logs.go:123] Gathering logs for etcd [7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab] ...
	I0717 23:00:58.096452   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab"
	I0717 23:00:58.141423   54649 logs.go:123] Gathering logs for kube-scheduler [4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad] ...
	I0717 23:00:58.141452   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad"
	I0717 23:00:58.183755   54649 logs.go:123] Gathering logs for kube-controller-manager [7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10] ...
	I0717 23:00:58.183792   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10"
	I0717 23:00:58.239385   54649 out.go:309] Setting ErrFile to fd 2...
	I0717 23:00:58.239418   54649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0717 23:00:58.239479   54649 out.go:239] X Problems detected in kubelet:
	W0717 23:00:58.239506   54649 out.go:239]   Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: W0717 22:56:50.841820    3828 reflector.go:533] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	W0717 23:00:58.239522   54649 out.go:239]   Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: E0717 22:56:50.841863    3828 reflector.go:148] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	I0717 23:00:58.239527   54649 out.go:309] Setting ErrFile to fd 2...
	I0717 23:00:58.239533   54649 out.go:343] TERM=,COLORTERM=, which probably does not support color
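The two "Problems detected in kubelet" entries above come from scanning the tailed kubelet journal for warning- and error-level klog records (here, node-authorizer denials for the coredns ConfigMap, which are typically transient while a restarted node re-registers). A rough sketch of that kind of scan, run on the node; the match patterns are simple heuristics taken from the journal format quoted above, not minikube's actual detector:

// kubeletproblems.go - illustrative sketch; not minikube's logs.go problem detector.
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Tail the kubelet journal the same way the log above does ("journalctl -u kubelet -n 400").
	out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
	if err != nil {
		fmt.Println("journalctl failed:", err)
		return
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		// klog warning/error records appear after the unit prefix as "W0717 ..." / "E0717 ...",
		// as in the two entries quoted above.
		if strings.Contains(line, ": W0") || strings.Contains(line, ": E0") {
			fmt.Println("Found kubelet problem:", line)
		}
	}
}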
	I0717 23:01:08.241689   54649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 23:01:08.259063   54649 api_server.go:72] duration metric: took 4m17.020334708s to wait for apiserver process to appear ...
	I0717 23:01:08.259090   54649 api_server.go:88] waiting for apiserver healthz status ...
	I0717 23:01:08.259125   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 23:01:08.259186   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 23:01:08.289063   54649 cri.go:89] found id: "45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0"
	I0717 23:01:08.289080   54649 cri.go:89] found id: ""
	I0717 23:01:08.289088   54649 logs.go:284] 1 containers: [45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0]
	I0717 23:01:08.289146   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:08.293604   54649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 23:01:08.293668   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 23:01:08.323866   54649 cri.go:89] found id: "7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab"
	I0717 23:01:08.323889   54649 cri.go:89] found id: ""
	I0717 23:01:08.323899   54649 logs.go:284] 1 containers: [7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab]
	I0717 23:01:08.324251   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:08.330335   54649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 23:01:08.330405   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 23:01:08.380361   54649 cri.go:89] found id: "30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223"
	I0717 23:01:08.380387   54649 cri.go:89] found id: ""
	I0717 23:01:08.380399   54649 logs.go:284] 1 containers: [30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223]
	I0717 23:01:08.380458   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:08.384547   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 23:01:08.384612   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 23:01:08.416767   54649 cri.go:89] found id: "4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad"
	I0717 23:01:08.416787   54649 cri.go:89] found id: ""
	I0717 23:01:08.416793   54649 logs.go:284] 1 containers: [4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad]
	I0717 23:01:08.416836   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:08.420982   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 23:01:08.421031   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 23:01:08.451034   54649 cri.go:89] found id: "a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524"
	I0717 23:01:08.451064   54649 cri.go:89] found id: ""
	I0717 23:01:08.451074   54649 logs.go:284] 1 containers: [a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524]
	I0717 23:01:08.451126   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:08.455015   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 23:01:08.455063   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 23:01:08.486539   54649 cri.go:89] found id: "7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10"
	I0717 23:01:08.486560   54649 cri.go:89] found id: ""
	I0717 23:01:08.486567   54649 logs.go:284] 1 containers: [7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10]
	I0717 23:01:08.486620   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:08.491106   54649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 23:01:08.491171   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 23:01:08.523068   54649 cri.go:89] found id: ""
	I0717 23:01:08.523099   54649 logs.go:284] 0 containers: []
	W0717 23:01:08.523109   54649 logs.go:286] No container was found matching "kindnet"
	I0717 23:01:08.523116   54649 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 23:01:08.523201   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 23:01:08.556090   54649 cri.go:89] found id: "4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6"
	I0717 23:01:08.556116   54649 cri.go:89] found id: ""
	I0717 23:01:08.556125   54649 logs.go:284] 1 containers: [4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6]
	I0717 23:01:08.556181   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:08.560278   54649 logs.go:123] Gathering logs for storage-provisioner [4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6] ...
	I0717 23:01:08.560301   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6"
	I0717 23:01:08.595021   54649 logs.go:123] Gathering logs for container status ...
	I0717 23:01:08.595052   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 23:01:08.640723   54649 logs.go:123] Gathering logs for dmesg ...
	I0717 23:01:08.640757   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 23:01:08.654641   54649 logs.go:123] Gathering logs for describe nodes ...
	I0717 23:01:08.654679   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 23:01:08.789999   54649 logs.go:123] Gathering logs for kube-apiserver [45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0] ...
	I0717 23:01:08.790026   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0"
	I0717 23:01:08.837387   54649 logs.go:123] Gathering logs for coredns [30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223] ...
	I0717 23:01:08.837420   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223"
	I0717 23:01:08.871514   54649 logs.go:123] Gathering logs for kube-proxy [a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524] ...
	I0717 23:01:08.871565   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524"
	I0717 23:01:08.911626   54649 logs.go:123] Gathering logs for kube-controller-manager [7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10] ...
	I0717 23:01:08.911657   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10"
	I0717 23:01:08.961157   54649 logs.go:123] Gathering logs for kubelet ...
	I0717 23:01:08.961192   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 23:01:09.040804   54649 logs.go:138] Found kubelet problem: Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: W0717 22:56:50.841820    3828 reflector.go:533] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	W0717 23:01:09.040992   54649 logs.go:138] Found kubelet problem: Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: E0717 22:56:50.841863    3828 reflector.go:148] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	I0717 23:01:09.067178   54649 logs.go:123] Gathering logs for etcd [7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab] ...
	I0717 23:01:09.067213   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab"
	I0717 23:01:09.104138   54649 logs.go:123] Gathering logs for kube-scheduler [4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad] ...
	I0717 23:01:09.104170   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad"
	I0717 23:01:09.146623   54649 logs.go:123] Gathering logs for CRI-O ...
	I0717 23:01:09.146653   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 23:01:09.681092   54649 out.go:309] Setting ErrFile to fd 2...
	I0717 23:01:09.681128   54649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0717 23:01:09.681200   54649 out.go:239] X Problems detected in kubelet:
	W0717 23:01:09.681217   54649 out.go:239]   Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: W0717 22:56:50.841820    3828 reflector.go:533] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	W0717 23:01:09.681229   54649 out.go:239]   Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: E0717 22:56:50.841863    3828 reflector.go:148] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	I0717 23:01:09.681237   54649 out.go:309] Setting ErrFile to fd 2...
	I0717 23:01:09.681244   54649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 23:01:19.682682   54649 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8444/healthz ...
	I0717 23:01:19.688102   54649 api_server.go:279] https://192.168.72.118:8444/healthz returned 200:
	ok
	I0717 23:01:19.689304   54649 api_server.go:141] control plane version: v1.27.3
	I0717 23:01:19.689323   54649 api_server.go:131] duration metric: took 11.430226781s to wait for apiserver health ...
	I0717 23:01:19.689330   54649 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 23:01:19.689349   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 23:01:19.689393   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 23:01:19.731728   54649 cri.go:89] found id: "45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0"
	I0717 23:01:19.731748   54649 cri.go:89] found id: ""
	I0717 23:01:19.731756   54649 logs.go:284] 1 containers: [45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0]
	I0717 23:01:19.731807   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:19.737797   54649 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 23:01:19.737857   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 23:01:19.776355   54649 cri.go:89] found id: "7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab"
	I0717 23:01:19.776377   54649 cri.go:89] found id: ""
	I0717 23:01:19.776385   54649 logs.go:284] 1 containers: [7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab]
	I0717 23:01:19.776438   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:19.780589   54649 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 23:01:19.780645   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 23:01:19.810917   54649 cri.go:89] found id: "30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223"
	I0717 23:01:19.810938   54649 cri.go:89] found id: ""
	I0717 23:01:19.810947   54649 logs.go:284] 1 containers: [30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223]
	I0717 23:01:19.811001   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:19.815185   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 23:01:19.815252   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 23:01:19.852138   54649 cri.go:89] found id: "4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad"
	I0717 23:01:19.852161   54649 cri.go:89] found id: ""
	I0717 23:01:19.852170   54649 logs.go:284] 1 containers: [4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad]
	I0717 23:01:19.852225   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:19.856947   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 23:01:19.857012   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 23:01:19.893668   54649 cri.go:89] found id: "a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524"
	I0717 23:01:19.893695   54649 cri.go:89] found id: ""
	I0717 23:01:19.893705   54649 logs.go:284] 1 containers: [a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524]
	I0717 23:01:19.893763   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:19.897862   54649 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 23:01:19.897915   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 23:01:19.935000   54649 cri.go:89] found id: "7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10"
	I0717 23:01:19.935024   54649 cri.go:89] found id: ""
	I0717 23:01:19.935033   54649 logs.go:284] 1 containers: [7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10]
	I0717 23:01:19.935097   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:19.939417   54649 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 23:01:19.939487   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 23:01:19.971266   54649 cri.go:89] found id: ""
	I0717 23:01:19.971296   54649 logs.go:284] 0 containers: []
	W0717 23:01:19.971305   54649 logs.go:286] No container was found matching "kindnet"
	I0717 23:01:19.971313   54649 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 23:01:19.971374   54649 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 23:01:20.007281   54649 cri.go:89] found id: "4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6"
	I0717 23:01:20.007299   54649 cri.go:89] found id: ""
	I0717 23:01:20.007306   54649 logs.go:284] 1 containers: [4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6]
	I0717 23:01:20.007351   54649 ssh_runner.go:195] Run: which crictl
	I0717 23:01:20.011751   54649 logs.go:123] Gathering logs for describe nodes ...
	I0717 23:01:20.011776   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 23:01:20.146025   54649 logs.go:123] Gathering logs for kube-apiserver [45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0] ...
	I0717 23:01:20.146052   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0"
	I0717 23:01:20.197984   54649 logs.go:123] Gathering logs for etcd [7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab] ...
	I0717 23:01:20.198014   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab"
	I0717 23:01:20.240729   54649 logs.go:123] Gathering logs for coredns [30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223] ...
	I0717 23:01:20.240765   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223"
	I0717 23:01:20.280904   54649 logs.go:123] Gathering logs for kube-scheduler [4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad] ...
	I0717 23:01:20.280931   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad"
	I0717 23:01:20.338648   54649 logs.go:123] Gathering logs for storage-provisioner [4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6] ...
	I0717 23:01:20.338679   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6"
	I0717 23:01:20.378549   54649 logs.go:123] Gathering logs for CRI-O ...
	I0717 23:01:20.378586   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 23:01:20.858716   54649 logs.go:123] Gathering logs for kubelet ...
	I0717 23:01:20.858759   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 23:01:20.944347   54649 logs.go:138] Found kubelet problem: Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: W0717 22:56:50.841820    3828 reflector.go:533] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	W0717 23:01:20.944538   54649 logs.go:138] Found kubelet problem: Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: E0717 22:56:50.841863    3828 reflector.go:148] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	I0717 23:01:20.971487   54649 logs.go:123] Gathering logs for kube-proxy [a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524] ...
	I0717 23:01:20.971520   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524"
	I0717 23:01:21.007705   54649 logs.go:123] Gathering logs for kube-controller-manager [7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10] ...
	I0717 23:01:21.007736   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10"
	I0717 23:01:21.059674   54649 logs.go:123] Gathering logs for container status ...
	I0717 23:01:21.059703   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 23:01:21.095693   54649 logs.go:123] Gathering logs for dmesg ...
	I0717 23:01:21.095722   54649 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 23:01:21.110247   54649 out.go:309] Setting ErrFile to fd 2...
	I0717 23:01:21.110273   54649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0717 23:01:21.110336   54649 out.go:239] X Problems detected in kubelet:
	W0717 23:01:21.110354   54649 out.go:239]   Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: W0717 22:56:50.841820    3828 reflector.go:533] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	W0717 23:01:21.110364   54649 out.go:239]   Jul 17 22:56:50 default-k8s-diff-port-504828 kubelet[3828]: E0717 22:56:50.841863    3828 reflector.go:148] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-504828" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-504828' and this object
	I0717 23:01:21.110371   54649 out.go:309] Setting ErrFile to fd 2...
	I0717 23:01:21.110379   54649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 23:01:31.121237   54649 system_pods.go:59] 8 kube-system pods found
	I0717 23:01:31.121266   54649 system_pods.go:61] "coredns-5d78c9869d-rqcjj" [9f3bc4cf-fb20-413e-b367-27bcb997ab80] Running
	I0717 23:01:31.121272   54649 system_pods.go:61] "etcd-default-k8s-diff-port-504828" [1e432373-0f87-4cda-969e-492a8b534af0] Running
	I0717 23:01:31.121280   54649 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-504828" [573bd1d1-09ff-40b5-9746-0b3fa3d51f08] Running
	I0717 23:01:31.121290   54649 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-504828" [c6baeefc-57b7-4710-998c-0af932d2db14] Running
	I0717 23:01:31.121299   54649 system_pods.go:61] "kube-proxy-nmtc8" [1f8a0182-d1df-4609-86d1-7695a138e32f] Running
	I0717 23:01:31.121307   54649 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-504828" [df487feb-f937-4832-ad65-38718d4325c5] Running
	I0717 23:01:31.121317   54649 system_pods.go:61] "metrics-server-74d5c6b9c-j8f2f" [328c892b-7402-480b-bc29-a316c8fb7b1f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 23:01:31.121339   54649 system_pods.go:61] "storage-provisioner" [0b840b23-0b6f-4ed1-9ae5-d96311b3b9f1] Running
	I0717 23:01:31.121347   54649 system_pods.go:74] duration metric: took 11.432011006s to wait for pod list to return data ...
	I0717 23:01:31.121357   54649 default_sa.go:34] waiting for default service account to be created ...
	I0717 23:01:31.124377   54649 default_sa.go:45] found service account: "default"
	I0717 23:01:31.124403   54649 default_sa.go:55] duration metric: took 3.036772ms for default service account to be created ...
	I0717 23:01:31.124413   54649 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 23:01:31.131080   54649 system_pods.go:86] 8 kube-system pods found
	I0717 23:01:31.131116   54649 system_pods.go:89] "coredns-5d78c9869d-rqcjj" [9f3bc4cf-fb20-413e-b367-27bcb997ab80] Running
	I0717 23:01:31.131125   54649 system_pods.go:89] "etcd-default-k8s-diff-port-504828" [1e432373-0f87-4cda-969e-492a8b534af0] Running
	I0717 23:01:31.131132   54649 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-504828" [573bd1d1-09ff-40b5-9746-0b3fa3d51f08] Running
	I0717 23:01:31.131140   54649 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-504828" [c6baeefc-57b7-4710-998c-0af932d2db14] Running
	I0717 23:01:31.131151   54649 system_pods.go:89] "kube-proxy-nmtc8" [1f8a0182-d1df-4609-86d1-7695a138e32f] Running
	I0717 23:01:31.131158   54649 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-504828" [df487feb-f937-4832-ad65-38718d4325c5] Running
	I0717 23:01:31.131182   54649 system_pods.go:89] "metrics-server-74d5c6b9c-j8f2f" [328c892b-7402-480b-bc29-a316c8fb7b1f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 23:01:31.131190   54649 system_pods.go:89] "storage-provisioner" [0b840b23-0b6f-4ed1-9ae5-d96311b3b9f1] Running
	I0717 23:01:31.131204   54649 system_pods.go:126] duration metric: took 6.785139ms to wait for k8s-apps to be running ...
	I0717 23:01:31.131211   54649 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 23:01:31.131260   54649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 23:01:31.150458   54649 system_svc.go:56] duration metric: took 19.234064ms WaitForService to wait for kubelet.
	I0717 23:01:31.150495   54649 kubeadm.go:581] duration metric: took 4m39.911769992s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 23:01:31.150523   54649 node_conditions.go:102] verifying NodePressure condition ...
	I0717 23:01:31.153677   54649 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 23:01:31.153700   54649 node_conditions.go:123] node cpu capacity is 2
	I0717 23:01:31.153710   54649 node_conditions.go:105] duration metric: took 3.182344ms to run NodePressure ...
	I0717 23:01:31.153720   54649 start.go:228] waiting for startup goroutines ...
	I0717 23:01:31.153726   54649 start.go:233] waiting for cluster config update ...
	I0717 23:01:31.153737   54649 start.go:242] writing updated cluster config ...
	I0717 23:01:31.153995   54649 ssh_runner.go:195] Run: rm -f paused
	I0717 23:01:31.204028   54649 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 23:01:31.207280   54649 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-504828" cluster and "default" namespace by default
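The trace ending above is minikube's readiness loop for the default-k8s-diff-port-504828 profile: it confirms a kube-apiserver process exists, polls the apiserver's /healthz endpoint on the profile's API port (8444), lists the kube-system pods, waits for the default service account, and verifies the kubelet service is active before printing "Done!". As an illustration only, the same checks could be repeated by hand on the node with the commands below; the IP, port, and paths are the ones logged in this run, and curl -k is used purely for convenience (minikube's own probe validates the cluster CA instead of skipping TLS verification).

    # Illustrative re-run of the readiness checks logged above (values specific to this run).
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'           # apiserver process is running
    curl -sk https://192.168.72.118:8444/healthz           # expect HTTP 200 with body "ok"
    sudo /var/lib/minikube/binaries/v1.27.3/kubectl get pods -n kube-system \
        --kubeconfig=/var/lib/minikube/kubeconfig          # kube-system pods present and Running
    sudo systemctl is-active --quiet kubelet && echo "kubelet active"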
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-07-17 22:51:25 UTC, ends at Mon 2023-07-17 23:10:05 UTC. --
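The CRI-O journal that follows was captured from the old-k8s-version-332820 node as part of the failure diagnostics. Its entries are debug-level gRPC traces of the runtime's ListContainers handler: each poll records the incoming request, a note that no filters were applied, and the full serialized container list in the response. A roughly equivalent manual query, assuming crictl on the node is pointed at CRI-O's default socket, is sketched below.

    # Illustrative: list the same containers the ListContainers responses below enumerate.
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a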
	Jul 17 23:10:05 old-k8s-version-332820 crio[709]: time="2023-07-17 23:10:05.255950234Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8b087ab7-1aff-4504-91c9-017146318c75 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:05 old-k8s-version-332820 crio[709]: time="2023-07-17 23:10:05.256240463Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:62b724cfd1a633a54cf3b5c3ecfd52b9c2e23a367496659e5e8e6692e8ac813e,PodSandboxId:96f5efbc2487124c26564cc4967d7ecc46a88ab84ae597c4236cedbbccc2f917,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634662512907550,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7158a1e3-713a-4702-b1d8-3553d7dfa0de,},Annotations:map[string]string{io.kubernetes.container.hash: 93fd4bd5,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1acb9b6c61f5fc4da5ee0a3781cf9ffef68c4867fd872852acc5e4fc6d721bf7,PodSandboxId:cbec98d5739c990986e9bb6c29758fd427f861cf2412bac4362f159bf0cf472c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1689634662106576475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dpnlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb78806d-3e64-4d07-a9d5-6bebaa1abe2d,},Annotations:map[string]string{io.kubernetes.container.hash: 17922504,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f89a87992124015181bbada6ad53a47d8a1b680af2a42284ebb50ebc0e56c3e,PodSandboxId:13a17920eb9da28301c091437bce641b398342ccf80bb56499db77b8a2013552,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1689634660694109931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-t4d2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e8166c1-8b07-4eca-9d2a-51d2142e7c08,},Annotations:map[string]string{io.kubernetes.container.hash: daf35be2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5359112c46eb47cdd3af5d5aec19ff2abaabc91270863590570596650e3aecd,PodSandboxId:84b5c00c0c09a074b5bb0c34ab9fb6d952424f5bb108bdf5aacb13f65f2e0ff6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1689634635016315026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c731a3514f98e74d0c0e942b30282b55,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 9d50df90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88888fbeeecaa7c6544687836984ad599b37fd124d1344cb23bd4d8200985121,PodSandboxId:0d0464abe6c14b9c7d15a1c003463fbccee13d2ed04534936a99abbd041ca8fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1689634633474882471,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f35cc67eaadee92a13127b4c9dd501d146a76b99d81048fa3c9557ab02e9a2d7,PodSandboxId:be9c23f96cb9c559ef50a19208e0fced8182189d75176cdb4bd1e9e2557ec0f9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1689634633330967857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41388bef09878d329575d8894181165b1675bef77f376fc71aa746031a1686b4,PodSandboxId:eab3e1882343b3a001a34fa04bce2e74487c01e5ed7245652cdd744f20bf107f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1689634633168328665,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0ef24da77c8ba3e688845e562219102,},Annotations:ma
p[string]string{io.kubernetes.container.hash: eb7203d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8b087ab7-1aff-4504-91c9-017146318c75 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:05 old-k8s-version-332820 crio[709]: time="2023-07-17 23:10:05.292458966Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=95b799ad-25e6-40d7-84bf-846328ead057 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:05 old-k8s-version-332820 crio[709]: time="2023-07-17 23:10:05.292550678Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=95b799ad-25e6-40d7-84bf-846328ead057 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:05 old-k8s-version-332820 crio[709]: time="2023-07-17 23:10:05.292713476Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:62b724cfd1a633a54cf3b5c3ecfd52b9c2e23a367496659e5e8e6692e8ac813e,PodSandboxId:96f5efbc2487124c26564cc4967d7ecc46a88ab84ae597c4236cedbbccc2f917,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634662512907550,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7158a1e3-713a-4702-b1d8-3553d7dfa0de,},Annotations:map[string]string{io.kubernetes.container.hash: 93fd4bd5,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1acb9b6c61f5fc4da5ee0a3781cf9ffef68c4867fd872852acc5e4fc6d721bf7,PodSandboxId:cbec98d5739c990986e9bb6c29758fd427f861cf2412bac4362f159bf0cf472c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1689634662106576475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dpnlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb78806d-3e64-4d07-a9d5-6bebaa1abe2d,},Annotations:map[string]string{io.kubernetes.container.hash: 17922504,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f89a87992124015181bbada6ad53a47d8a1b680af2a42284ebb50ebc0e56c3e,PodSandboxId:13a17920eb9da28301c091437bce641b398342ccf80bb56499db77b8a2013552,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1689634660694109931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-t4d2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e8166c1-8b07-4eca-9d2a-51d2142e7c08,},Annotations:map[string]string{io.kubernetes.container.hash: daf35be2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5359112c46eb47cdd3af5d5aec19ff2abaabc91270863590570596650e3aecd,PodSandboxId:84b5c00c0c09a074b5bb0c34ab9fb6d952424f5bb108bdf5aacb13f65f2e0ff6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1689634635016315026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c731a3514f98e74d0c0e942b30282b55,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 9d50df90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88888fbeeecaa7c6544687836984ad599b37fd124d1344cb23bd4d8200985121,PodSandboxId:0d0464abe6c14b9c7d15a1c003463fbccee13d2ed04534936a99abbd041ca8fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1689634633474882471,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f35cc67eaadee92a13127b4c9dd501d146a76b99d81048fa3c9557ab02e9a2d7,PodSandboxId:be9c23f96cb9c559ef50a19208e0fced8182189d75176cdb4bd1e9e2557ec0f9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1689634633330967857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41388bef09878d329575d8894181165b1675bef77f376fc71aa746031a1686b4,PodSandboxId:eab3e1882343b3a001a34fa04bce2e74487c01e5ed7245652cdd744f20bf107f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1689634633168328665,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0ef24da77c8ba3e688845e562219102,},Annotations:ma
p[string]string{io.kubernetes.container.hash: eb7203d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=95b799ad-25e6-40d7-84bf-846328ead057 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:05 old-k8s-version-332820 crio[709]: time="2023-07-17 23:10:05.325600925Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=24a0fc7b-5edf-45b5-8b3a-67ac0912e950 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:05 old-k8s-version-332820 crio[709]: time="2023-07-17 23:10:05.325665258Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=24a0fc7b-5edf-45b5-8b3a-67ac0912e950 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:05 old-k8s-version-332820 crio[709]: time="2023-07-17 23:10:05.325827068Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:62b724cfd1a633a54cf3b5c3ecfd52b9c2e23a367496659e5e8e6692e8ac813e,PodSandboxId:96f5efbc2487124c26564cc4967d7ecc46a88ab84ae597c4236cedbbccc2f917,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634662512907550,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7158a1e3-713a-4702-b1d8-3553d7dfa0de,},Annotations:map[string]string{io.kubernetes.container.hash: 93fd4bd5,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1acb9b6c61f5fc4da5ee0a3781cf9ffef68c4867fd872852acc5e4fc6d721bf7,PodSandboxId:cbec98d5739c990986e9bb6c29758fd427f861cf2412bac4362f159bf0cf472c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1689634662106576475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dpnlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb78806d-3e64-4d07-a9d5-6bebaa1abe2d,},Annotations:map[string]string{io.kubernetes.container.hash: 17922504,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f89a87992124015181bbada6ad53a47d8a1b680af2a42284ebb50ebc0e56c3e,PodSandboxId:13a17920eb9da28301c091437bce641b398342ccf80bb56499db77b8a2013552,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1689634660694109931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-t4d2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e8166c1-8b07-4eca-9d2a-51d2142e7c08,},Annotations:map[string]string{io.kubernetes.container.hash: daf35be2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5359112c46eb47cdd3af5d5aec19ff2abaabc91270863590570596650e3aecd,PodSandboxId:84b5c00c0c09a074b5bb0c34ab9fb6d952424f5bb108bdf5aacb13f65f2e0ff6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1689634635016315026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c731a3514f98e74d0c0e942b30282b55,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 9d50df90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88888fbeeecaa7c6544687836984ad599b37fd124d1344cb23bd4d8200985121,PodSandboxId:0d0464abe6c14b9c7d15a1c003463fbccee13d2ed04534936a99abbd041ca8fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1689634633474882471,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f35cc67eaadee92a13127b4c9dd501d146a76b99d81048fa3c9557ab02e9a2d7,PodSandboxId:be9c23f96cb9c559ef50a19208e0fced8182189d75176cdb4bd1e9e2557ec0f9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1689634633330967857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41388bef09878d329575d8894181165b1675bef77f376fc71aa746031a1686b4,PodSandboxId:eab3e1882343b3a001a34fa04bce2e74487c01e5ed7245652cdd744f20bf107f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1689634633168328665,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0ef24da77c8ba3e688845e562219102,},Annotations:ma
p[string]string{io.kubernetes.container.hash: eb7203d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=24a0fc7b-5edf-45b5-8b3a-67ac0912e950 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:05 old-k8s-version-332820 crio[709]: time="2023-07-17 23:10:05.360071890Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c2b881c9-d8ca-4694-b99e-5636db96d8d9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:05 old-k8s-version-332820 crio[709]: time="2023-07-17 23:10:05.360226103Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c2b881c9-d8ca-4694-b99e-5636db96d8d9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:05 old-k8s-version-332820 crio[709]: time="2023-07-17 23:10:05.360398833Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:62b724cfd1a633a54cf3b5c3ecfd52b9c2e23a367496659e5e8e6692e8ac813e,PodSandboxId:96f5efbc2487124c26564cc4967d7ecc46a88ab84ae597c4236cedbbccc2f917,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634662512907550,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7158a1e3-713a-4702-b1d8-3553d7dfa0de,},Annotations:map[string]string{io.kubernetes.container.hash: 93fd4bd5,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1acb9b6c61f5fc4da5ee0a3781cf9ffef68c4867fd872852acc5e4fc6d721bf7,PodSandboxId:cbec98d5739c990986e9bb6c29758fd427f861cf2412bac4362f159bf0cf472c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1689634662106576475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dpnlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb78806d-3e64-4d07-a9d5-6bebaa1abe2d,},Annotations:map[string]string{io.kubernetes.container.hash: 17922504,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f89a87992124015181bbada6ad53a47d8a1b680af2a42284ebb50ebc0e56c3e,PodSandboxId:13a17920eb9da28301c091437bce641b398342ccf80bb56499db77b8a2013552,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1689634660694109931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-t4d2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e8166c1-8b07-4eca-9d2a-51d2142e7c08,},Annotations:map[string]string{io.kubernetes.container.hash: daf35be2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5359112c46eb47cdd3af5d5aec19ff2abaabc91270863590570596650e3aecd,PodSandboxId:84b5c00c0c09a074b5bb0c34ab9fb6d952424f5bb108bdf5aacb13f65f2e0ff6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1689634635016315026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c731a3514f98e74d0c0e942b30282b55,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 9d50df90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88888fbeeecaa7c6544687836984ad599b37fd124d1344cb23bd4d8200985121,PodSandboxId:0d0464abe6c14b9c7d15a1c003463fbccee13d2ed04534936a99abbd041ca8fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1689634633474882471,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f35cc67eaadee92a13127b4c9dd501d146a76b99d81048fa3c9557ab02e9a2d7,PodSandboxId:be9c23f96cb9c559ef50a19208e0fced8182189d75176cdb4bd1e9e2557ec0f9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1689634633330967857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41388bef09878d329575d8894181165b1675bef77f376fc71aa746031a1686b4,PodSandboxId:eab3e1882343b3a001a34fa04bce2e74487c01e5ed7245652cdd744f20bf107f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1689634633168328665,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0ef24da77c8ba3e688845e562219102,},Annotations:ma
p[string]string{io.kubernetes.container.hash: eb7203d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c2b881c9-d8ca-4694-b99e-5636db96d8d9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:05 old-k8s-version-332820 crio[709]: time="2023-07-17 23:10:05.393991734Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5be7a1d6-605f-4695-a49a-a45e5da3d8a6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:05 old-k8s-version-332820 crio[709]: time="2023-07-17 23:10:05.394129379Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5be7a1d6-605f-4695-a49a-a45e5da3d8a6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:05 old-k8s-version-332820 crio[709]: time="2023-07-17 23:10:05.394400197Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:62b724cfd1a633a54cf3b5c3ecfd52b9c2e23a367496659e5e8e6692e8ac813e,PodSandboxId:96f5efbc2487124c26564cc4967d7ecc46a88ab84ae597c4236cedbbccc2f917,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634662512907550,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7158a1e3-713a-4702-b1d8-3553d7dfa0de,},Annotations:map[string]string{io.kubernetes.container.hash: 93fd4bd5,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1acb9b6c61f5fc4da5ee0a3781cf9ffef68c4867fd872852acc5e4fc6d721bf7,PodSandboxId:cbec98d5739c990986e9bb6c29758fd427f861cf2412bac4362f159bf0cf472c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1689634662106576475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dpnlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb78806d-3e64-4d07-a9d5-6bebaa1abe2d,},Annotations:map[string]string{io.kubernetes.container.hash: 17922504,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f89a87992124015181bbada6ad53a47d8a1b680af2a42284ebb50ebc0e56c3e,PodSandboxId:13a17920eb9da28301c091437bce641b398342ccf80bb56499db77b8a2013552,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1689634660694109931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-t4d2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e8166c1-8b07-4eca-9d2a-51d2142e7c08,},Annotations:map[string]string{io.kubernetes.container.hash: daf35be2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5359112c46eb47cdd3af5d5aec19ff2abaabc91270863590570596650e3aecd,PodSandboxId:84b5c00c0c09a074b5bb0c34ab9fb6d952424f5bb108bdf5aacb13f65f2e0ff6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1689634635016315026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c731a3514f98e74d0c0e942b30282b55,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 9d50df90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88888fbeeecaa7c6544687836984ad599b37fd124d1344cb23bd4d8200985121,PodSandboxId:0d0464abe6c14b9c7d15a1c003463fbccee13d2ed04534936a99abbd041ca8fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1689634633474882471,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f35cc67eaadee92a13127b4c9dd501d146a76b99d81048fa3c9557ab02e9a2d7,PodSandboxId:be9c23f96cb9c559ef50a19208e0fced8182189d75176cdb4bd1e9e2557ec0f9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1689634633330967857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41388bef09878d329575d8894181165b1675bef77f376fc71aa746031a1686b4,PodSandboxId:eab3e1882343b3a001a34fa04bce2e74487c01e5ed7245652cdd744f20bf107f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1689634633168328665,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0ef24da77c8ba3e688845e562219102,},Annotations:ma
p[string]string{io.kubernetes.container.hash: eb7203d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5be7a1d6-605f-4695-a49a-a45e5da3d8a6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:05 old-k8s-version-332820 crio[709]: time="2023-07-17 23:10:05.408935246Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3eaf8d73-7c1f-4963-a83c-94894d82f603 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Jul 17 23:10:05 old-k8s-version-332820 crio[709]: time="2023-07-17 23:10:05.409225705Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:8846d2fa31c7a87d4e295017e6f2a257a59c6e2c81fd9b1320f17a2ce7e6d7d2,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5856cc6-59wx5,Uid:3ddd38f4-fe18-4e49-bff3-f8f73a688b98,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634662855423503,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5856cc6-59wx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ddd38f4-fe18-4e49-bff3-f8f73a688b98,k8s-app: metrics-server,pod-template-hash: 74d5856cc6,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T22:57:42.202517062Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:96f5efbc2487124c26564cc4967d7ecc46a88ab84ae597c4236cedbbccc2f917,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7158a1e3-713a-4702-b1d8-3553d7dfa0
de,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634661449069215,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7158a1e3-713a-4702-b1d8-3553d7dfa0de,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\
"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-07-17T22:57:41.082539509Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cbec98d5739c990986e9bb6c29758fd427f861cf2412bac4362f159bf0cf472c,Metadata:&PodSandboxMetadata{Name:kube-proxy-dpnlw,Uid:eb78806d-3e64-4d07-a9d5-6bebaa1abe2d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634660643992925,Labels:map[string]string{controller-revision-hash: 68594d95c,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-dpnlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb78806d-3e64-4d07-a9d5-6bebaa1abe2d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T22:57:38.769122644Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:13a17920eb9da28301c091437bce641b398342ccf80bb56499db77b8a2013552,Metadata:&PodSandboxMetadata{Name:coredns-5644d7b6d9-t4d2t
,Uid:5e8166c1-8b07-4eca-9d2a-51d2142e7c08,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634660035973514,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5644d7b6d9-t4d2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e8166c1-8b07-4eca-9d2a-51d2142e7c08,k8s-app: kube-dns,pod-template-hash: 5644d7b6d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T22:57:38.769063591Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0d0464abe6c14b9c7d15a1c003463fbccee13d2ed04534936a99abbd041ca8fe,Metadata:&PodSandboxMetadata{Name:kube-scheduler-old-k8s-version-332820,Uid:b3d303074fe0ca1d42a8bd9ed248df09,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634632642752874,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca
1d42a8bd9ed248df09,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b3d303074fe0ca1d42a8bd9ed248df09,kubernetes.io/config.seen: 2023-07-17T22:57:12.216000153Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:be9c23f96cb9c559ef50a19208e0fced8182189d75176cdb4bd1e9e2557ec0f9,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-old-k8s-version-332820,Uid:7376ddb4f190a0ded9394063437bcb4e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634632630062437,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7376ddb4f190a0ded9394063437bcb4e,kubernetes.io/config.seen: 2023-07-17T22:57:12.214559532Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{I
d:84b5c00c0c09a074b5bb0c34ab9fb6d952424f5bb108bdf5aacb13f65f2e0ff6,Metadata:&PodSandboxMetadata{Name:etcd-old-k8s-version-332820,Uid:c731a3514f98e74d0c0e942b30282b55,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634632603045216,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c731a3514f98e74d0c0e942b30282b55,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c731a3514f98e74d0c0e942b30282b55,kubernetes.io/config.seen: 2023-07-17T22:57:12.217089699Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:eab3e1882343b3a001a34fa04bce2e74487c01e5ed7245652cdd744f20bf107f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-old-k8s-version-332820,Uid:e0ef24da77c8ba3e688845e562219102,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634632559916807,Labels:map[string]string{component: kube-apiserver,io
.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0ef24da77c8ba3e688845e562219102,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e0ef24da77c8ba3e688845e562219102,kubernetes.io/config.seen: 2023-07-17T22:57:12.214501287Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=3eaf8d73-7c1f-4963-a83c-94894d82f603 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Jul 17 23:10:05 old-k8s-version-332820 crio[709]: time="2023-07-17 23:10:05.409944621Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9d9ba05f-b6fd-48b8-9b7c-1ca920af6363 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:05 old-k8s-version-332820 crio[709]: time="2023-07-17 23:10:05.410025250Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9d9ba05f-b6fd-48b8-9b7c-1ca920af6363 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:05 old-k8s-version-332820 crio[709]: time="2023-07-17 23:10:05.410318571Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:62b724cfd1a633a54cf3b5c3ecfd52b9c2e23a367496659e5e8e6692e8ac813e,PodSandboxId:96f5efbc2487124c26564cc4967d7ecc46a88ab84ae597c4236cedbbccc2f917,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634662512907550,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7158a1e3-713a-4702-b1d8-3553d7dfa0de,},Annotations:map[string]string{io.kubernetes.container.hash: 93fd4bd5,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1acb9b6c61f5fc4da5ee0a3781cf9ffef68c4867fd872852acc5e4fc6d721bf7,PodSandboxId:cbec98d5739c990986e9bb6c29758fd427f861cf2412bac4362f159bf0cf472c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1689634662106576475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dpnlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb78806d-3e64-4d07-a9d5-6bebaa1abe2d,},Annotations:map[string]string{io.kubernetes.container.hash: 17922504,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f89a87992124015181bbada6ad53a47d8a1b680af2a42284ebb50ebc0e56c3e,PodSandboxId:13a17920eb9da28301c091437bce641b398342ccf80bb56499db77b8a2013552,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1689634660694109931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-t4d2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e8166c1-8b07-4eca-9d2a-51d2142e7c08,},Annotations:map[string]string{io.kubernetes.container.hash: daf35be2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5359112c46eb47cdd3af5d5aec19ff2abaabc91270863590570596650e3aecd,PodSandboxId:84b5c00c0c09a074b5bb0c34ab9fb6d952424f5bb108bdf5aacb13f65f2e0ff6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1689634635016315026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c731a3514f98e74d0c0e942b30282b55,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 9d50df90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88888fbeeecaa7c6544687836984ad599b37fd124d1344cb23bd4d8200985121,PodSandboxId:0d0464abe6c14b9c7d15a1c003463fbccee13d2ed04534936a99abbd041ca8fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1689634633474882471,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f35cc67eaadee92a13127b4c9dd501d146a76b99d81048fa3c9557ab02e9a2d7,PodSandboxId:be9c23f96cb9c559ef50a19208e0fced8182189d75176cdb4bd1e9e2557ec0f9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1689634633330967857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41388bef09878d329575d8894181165b1675bef77f376fc71aa746031a1686b4,PodSandboxId:eab3e1882343b3a001a34fa04bce2e74487c01e5ed7245652cdd744f20bf107f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1689634633168328665,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0ef24da77c8ba3e688845e562219102,},Annotations:ma
p[string]string{io.kubernetes.container.hash: eb7203d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9d9ba05f-b6fd-48b8-9b7c-1ca920af6363 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:05 old-k8s-version-332820 crio[709]: time="2023-07-17 23:10:05.434322436Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7c53ba55-63b8-4a0e-b166-1bc4787d44f8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:05 old-k8s-version-332820 crio[709]: time="2023-07-17 23:10:05.434410326Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7c53ba55-63b8-4a0e-b166-1bc4787d44f8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:05 old-k8s-version-332820 crio[709]: time="2023-07-17 23:10:05.434579749Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:62b724cfd1a633a54cf3b5c3ecfd52b9c2e23a367496659e5e8e6692e8ac813e,PodSandboxId:96f5efbc2487124c26564cc4967d7ecc46a88ab84ae597c4236cedbbccc2f917,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634662512907550,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7158a1e3-713a-4702-b1d8-3553d7dfa0de,},Annotations:map[string]string{io.kubernetes.container.hash: 93fd4bd5,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1acb9b6c61f5fc4da5ee0a3781cf9ffef68c4867fd872852acc5e4fc6d721bf7,PodSandboxId:cbec98d5739c990986e9bb6c29758fd427f861cf2412bac4362f159bf0cf472c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1689634662106576475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dpnlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb78806d-3e64-4d07-a9d5-6bebaa1abe2d,},Annotations:map[string]string{io.kubernetes.container.hash: 17922504,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f89a87992124015181bbada6ad53a47d8a1b680af2a42284ebb50ebc0e56c3e,PodSandboxId:13a17920eb9da28301c091437bce641b398342ccf80bb56499db77b8a2013552,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1689634660694109931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-t4d2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e8166c1-8b07-4eca-9d2a-51d2142e7c08,},Annotations:map[string]string{io.kubernetes.container.hash: daf35be2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5359112c46eb47cdd3af5d5aec19ff2abaabc91270863590570596650e3aecd,PodSandboxId:84b5c00c0c09a074b5bb0c34ab9fb6d952424f5bb108bdf5aacb13f65f2e0ff6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1689634635016315026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c731a3514f98e74d0c0e942b30282b55,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 9d50df90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88888fbeeecaa7c6544687836984ad599b37fd124d1344cb23bd4d8200985121,PodSandboxId:0d0464abe6c14b9c7d15a1c003463fbccee13d2ed04534936a99abbd041ca8fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1689634633474882471,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f35cc67eaadee92a13127b4c9dd501d146a76b99d81048fa3c9557ab02e9a2d7,PodSandboxId:be9c23f96cb9c559ef50a19208e0fced8182189d75176cdb4bd1e9e2557ec0f9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1689634633330967857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41388bef09878d329575d8894181165b1675bef77f376fc71aa746031a1686b4,PodSandboxId:eab3e1882343b3a001a34fa04bce2e74487c01e5ed7245652cdd744f20bf107f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1689634633168328665,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0ef24da77c8ba3e688845e562219102,},Annotations:ma
p[string]string{io.kubernetes.container.hash: eb7203d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7c53ba55-63b8-4a0e-b166-1bc4787d44f8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:05 old-k8s-version-332820 crio[709]: time="2023-07-17 23:10:05.467429515Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=579d52d1-a990-4808-bfff-4d5c4c993639 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:05 old-k8s-version-332820 crio[709]: time="2023-07-17 23:10:05.467548534Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=579d52d1-a990-4808-bfff-4d5c4c993639 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:10:05 old-k8s-version-332820 crio[709]: time="2023-07-17 23:10:05.467774509Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:62b724cfd1a633a54cf3b5c3ecfd52b9c2e23a367496659e5e8e6692e8ac813e,PodSandboxId:96f5efbc2487124c26564cc4967d7ecc46a88ab84ae597c4236cedbbccc2f917,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634662512907550,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7158a1e3-713a-4702-b1d8-3553d7dfa0de,},Annotations:map[string]string{io.kubernetes.container.hash: 93fd4bd5,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1acb9b6c61f5fc4da5ee0a3781cf9ffef68c4867fd872852acc5e4fc6d721bf7,PodSandboxId:cbec98d5739c990986e9bb6c29758fd427f861cf2412bac4362f159bf0cf472c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1689634662106576475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dpnlw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb78806d-3e64-4d07-a9d5-6bebaa1abe2d,},Annotations:map[string]string{io.kubernetes.container.hash: 17922504,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f89a87992124015181bbada6ad53a47d8a1b680af2a42284ebb50ebc0e56c3e,PodSandboxId:13a17920eb9da28301c091437bce641b398342ccf80bb56499db77b8a2013552,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1689634660694109931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-t4d2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e8166c1-8b07-4eca-9d2a-51d2142e7c08,},Annotations:map[string]string{io.kubernetes.container.hash: daf35be2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5359112c46eb47cdd3af5d5aec19ff2abaabc91270863590570596650e3aecd,PodSandboxId:84b5c00c0c09a074b5bb0c34ab9fb6d952424f5bb108bdf5aacb13f65f2e0ff6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1689634635016315026,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c731a3514f98e74d0c0e942b30282b55,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 9d50df90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88888fbeeecaa7c6544687836984ad599b37fd124d1344cb23bd4d8200985121,PodSandboxId:0d0464abe6c14b9c7d15a1c003463fbccee13d2ed04534936a99abbd041ca8fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1689634633474882471,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f35cc67eaadee92a13127b4c9dd501d146a76b99d81048fa3c9557ab02e9a2d7,PodSandboxId:be9c23f96cb9c559ef50a19208e0fced8182189d75176cdb4bd1e9e2557ec0f9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1689634633330967857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41388bef09878d329575d8894181165b1675bef77f376fc71aa746031a1686b4,PodSandboxId:eab3e1882343b3a001a34fa04bce2e74487c01e5ed7245652cdd744f20bf107f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1689634633168328665,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-332820,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0ef24da77c8ba3e688845e562219102,},Annotations:ma
p[string]string{io.kubernetes.container.hash: eb7203d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=579d52d1-a990-4808-bfff-4d5c4c993639 name=/runtime.v1alpha2.RuntimeService/ListContainers
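The ListContainers and ListPodSandbox request/response pairs above are the kubelet (and minikube's log collector) polling CRI-O over its local gRPC socket. As a rough illustration of that call path (a minimal sketch only, not part of the test suite; it assumes the /var/run/crio/crio.sock path from the node's cri-socket annotation and the k8s.io/cri-api v1alpha2 client matching the RuntimeService version in these logs):

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
)

func main() {
	// Dial CRI-O's unix socket; no credentials are needed for a local socket.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock", grpc.WithInsecure())
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)

	// An empty filter matches the "No filters were applied" requests in the log,
	// so CRI-O returns the full container list.
	resp, err := client.ListContainers(context.Background(),
		&runtimeapi.ListContainersRequest{Filter: &runtimeapi.ContainerFilter{}})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %s  %s\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}

The container-status table that follows is essentially a condensed view of the same response.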
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	62b724cfd1a63       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 minutes ago      Running             storage-provisioner       0                   96f5efbc24871
	1acb9b6c61f5f       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   12 minutes ago      Running             kube-proxy                0                   cbec98d5739c9
	9f89a87992124       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   12 minutes ago      Running             coredns                   0                   13a17920eb9da
	b5359112c46eb       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   12 minutes ago      Running             etcd                      0                   84b5c00c0c09a
	88888fbeeecaa       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   12 minutes ago      Running             kube-scheduler            0                   0d0464abe6c14
	f35cc67eaadee       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   12 minutes ago      Running             kube-controller-manager   0                   be9c23f96cb9c
	41388bef09878       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   12 minutes ago      Running             kube-apiserver            0                   eab3e1882343b
	
	* 
	* ==> coredns [9f89a87992124015181bbada6ad53a47d8a1b680af2a42284ebb50ebc0e56c3e] <==
	* .:53
	2023-07-17T22:57:41.166Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-07-17T22:57:41.166Z [INFO] CoreDNS-1.6.2
	2023-07-17T22:57:41.166Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2023-07-17T22:58:14.004Z [INFO] plugin/reload: Running configuration MD5 = 06ff7f9bb57317d7ab02f5fb9baaa00d
	[INFO] Reloading complete
	2023-07-17T22:58:14.013Z [INFO] 127.0.0.1:37326 - 45130 "HINFO IN 6798697741476462037.7490281844572158290. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009508661s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-332820
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-332820
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8
	                    minikube.k8s.io/name=old-k8s-version-332820
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T22_57_23_0700
	                    minikube.k8s.io/version=v1.31.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 22:57:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 23:09:19 +0000   Mon, 17 Jul 2023 22:57:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 23:09:19 +0000   Mon, 17 Jul 2023 22:57:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 23:09:19 +0000   Mon, 17 Jul 2023 22:57:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 23:09:19 +0000   Mon, 17 Jul 2023 22:57:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.149
	  Hostname:    old-k8s-version-332820
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 8eea51a4a36646208bfdf952d5c22016
	 System UUID:                8eea51a4-a366-4620-8bfd-f952d5c22016
	 Boot ID:                    f5937962-8992-4fbd-b792-6457e4896f08
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-t4d2t                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                etcd-old-k8s-version-332820                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-apiserver-old-k8s-version-332820             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-controller-manager-old-k8s-version-332820    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-proxy-dpnlw                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-scheduler-old-k8s-version-332820             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                metrics-server-74d5856cc6-59wx5                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         12m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet, old-k8s-version-332820     Node old-k8s-version-332820 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x7 over 12m)  kubelet, old-k8s-version-332820     Node old-k8s-version-332820 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x8 over 12m)  kubelet, old-k8s-version-332820     Node old-k8s-version-332820 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kube-proxy, old-k8s-version-332820  Starting kube-proxy.
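For reference, the request percentages in the "Non-terminated Pods" and "Allocated resources" tables above are computed against the node's allocatable capacity (2 CPU = 2000m; 2165900Ki ≈ 2115Mi of memory). A rough check of the totals, assuming whole-percent truncation:

  cpu:    100m + 250m + 200m + 100m + 100m = 750m   ->  750m / 2000m   ≈ 37%
  memory: 70Mi + 200Mi                     = 270Mi  ->  270Mi / 2115Mi ≈ 12%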
	
	* 
	* ==> dmesg <==
	* [Jul17 22:51] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.083287] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.653381] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.324753] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.164626] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.548718] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.575410] systemd-fstab-generator[635]: Ignoring "noauto" for root device
	[  +0.155978] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.162368] systemd-fstab-generator[659]: Ignoring "noauto" for root device
	[  +0.138267] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.264471] systemd-fstab-generator[694]: Ignoring "noauto" for root device
	[ +20.246133] systemd-fstab-generator[1026]: Ignoring "noauto" for root device
	[  +0.490550] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jul17 22:52] kauditd_printk_skb: 13 callbacks suppressed
	[ +10.389388] kauditd_printk_skb: 2 callbacks suppressed
	[Jul17 22:56] kauditd_printk_skb: 3 callbacks suppressed
	[Jul17 22:57] systemd-fstab-generator[3227]: Ignoring "noauto" for root device
	[ +39.443745] kauditd_printk_skb: 11 callbacks suppressed
	
	* 
	* ==> etcd [b5359112c46eb47cdd3af5d5aec19ff2abaabc91270863590570596650e3aecd] <==
	* 2023-07-17 22:57:15.161860 I | raft: d484739f521fd65e became follower at term 0
	2023-07-17 22:57:15.161880 I | raft: newRaft d484739f521fd65e [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-07-17 22:57:15.161903 I | raft: d484739f521fd65e became follower at term 1
	2023-07-17 22:57:15.171386 W | auth: simple token is not cryptographically signed
	2023-07-17 22:57:15.176736 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-07-17 22:57:15.178683 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-07-17 22:57:15.178938 I | embed: listening for metrics on http://192.168.50.149:2381
	2023-07-17 22:57:15.179558 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-07-17 22:57:15.180421 I | etcdserver: d484739f521fd65e as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-07-17 22:57:15.180780 I | etcdserver/membership: added member d484739f521fd65e [https://192.168.50.149:2380] to cluster 5bc15d5d2e20321
	2023-07-17 22:57:15.362429 I | raft: d484739f521fd65e is starting a new election at term 1
	2023-07-17 22:57:15.362549 I | raft: d484739f521fd65e became candidate at term 2
	2023-07-17 22:57:15.362640 I | raft: d484739f521fd65e received MsgVoteResp from d484739f521fd65e at term 2
	2023-07-17 22:57:15.362698 I | raft: d484739f521fd65e became leader at term 2
	2023-07-17 22:57:15.362722 I | raft: raft.node: d484739f521fd65e elected leader d484739f521fd65e at term 2
	2023-07-17 22:57:15.363416 I | etcdserver: setting up the initial cluster version to 3.3
	2023-07-17 22:57:15.364522 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-07-17 22:57:15.364587 I | etcdserver/api: enabled capabilities for version 3.3
	2023-07-17 22:57:15.364628 I | etcdserver: published {Name:old-k8s-version-332820 ClientURLs:[https://192.168.50.149:2379]} to cluster 5bc15d5d2e20321
	2023-07-17 22:57:15.364904 I | embed: ready to serve client requests
	2023-07-17 22:57:15.366115 I | embed: serving client requests on 192.168.50.149:2379
	2023-07-17 22:57:15.366497 I | embed: ready to serve client requests
	2023-07-17 22:57:15.367554 I | embed: serving client requests on 127.0.0.1:2379
	2023-07-17 23:07:15.567040 I | mvcc: store.index: compact 669
	2023-07-17 23:07:15.569060 I | mvcc: finished scheduled compaction at 669 (took 1.278024ms)
	
	* 
	* ==> kernel <==
	*  23:10:05 up 18 min,  0 users,  load average: 0.04, 0.13, 0.12
	Linux old-k8s-version-332820 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [41388bef09878d329575d8894181165b1675bef77f376fc71aa746031a1686b4] <==
	* I0717 23:02:19.837712       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0717 23:02:19.838048       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 23:02:19.838130       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 23:02:19.838153       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 23:03:19.838741       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0717 23:03:19.839054       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 23:03:19.839148       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 23:03:19.839265       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 23:05:19.839695       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0717 23:05:19.839824       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 23:05:19.839903       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 23:05:19.839910       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 23:07:19.841548       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0717 23:07:19.841888       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 23:07:19.842010       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 23:07:19.842041       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 23:08:19.842324       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0717 23:08:19.842586       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 23:08:19.842691       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 23:08:19.842723       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [f35cc67eaadee92a13127b4c9dd501d146a76b99d81048fa3c9557ab02e9a2d7] <==
	* E0717 23:03:41.826382       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 23:04:02.763469       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 23:04:12.078956       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 23:04:34.765895       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 23:04:42.331103       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 23:05:06.768431       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 23:05:12.583731       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 23:05:38.769836       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 23:05:42.835502       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 23:06:10.772089       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 23:06:13.087578       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 23:06:42.774433       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 23:06:43.340297       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0717 23:07:13.592552       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 23:07:14.777579       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 23:07:43.845453       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 23:07:46.780088       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 23:08:14.097757       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 23:08:18.782390       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 23:08:44.349638       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 23:08:50.784391       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 23:09:14.601878       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 23:09:22.786539       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 23:09:44.854459       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0717 23:09:54.789025       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [1acb9b6c61f5fc4da5ee0a3781cf9ffef68c4867fd872852acc5e4fc6d721bf7] <==
	* W0717 22:57:42.418330       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0717 22:57:42.431239       1 node.go:135] Successfully retrieved node IP: 192.168.50.149
	I0717 22:57:42.431345       1 server_others.go:149] Using iptables Proxier.
	I0717 22:57:42.432942       1 server.go:529] Version: v1.16.0
	I0717 22:57:42.436325       1 config.go:313] Starting service config controller
	I0717 22:57:42.436429       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0717 22:57:42.436472       1 config.go:131] Starting endpoints config controller
	I0717 22:57:42.436528       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0717 22:57:42.536778       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0717 22:57:42.539326       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [88888fbeeecaa7c6544687836984ad599b37fd124d1344cb23bd4d8200985121] <==
	* W0717 22:57:18.835439       1 authentication.go:79] Authentication is disabled
	I0717 22:57:18.835449       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0717 22:57:18.835813       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0717 22:57:18.887948       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 22:57:18.889347       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 22:57:18.890911       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 22:57:18.891156       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 22:57:18.891327       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 22:57:18.891406       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 22:57:18.891470       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 22:57:18.891839       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 22:57:18.891951       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 22:57:18.892985       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 22:57:18.893531       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 22:57:19.892109       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 22:57:19.892631       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 22:57:19.894907       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 22:57:19.899893       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 22:57:19.901535       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 22:57:19.903490       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 22:57:19.906527       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 22:57:19.907626       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 22:57:19.908727       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 22:57:19.921425       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 22:57:19.924522       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-07-17 22:51:25 UTC, ends at Mon 2023-07-17 23:10:05 UTC. --
	Jul 17 23:05:34 old-k8s-version-332820 kubelet[3233]: E0717 23:05:34.407721    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:05:49 old-k8s-version-332820 kubelet[3233]: E0717 23:05:49.407865    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:06:02 old-k8s-version-332820 kubelet[3233]: E0717 23:06:02.407463    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:06:15 old-k8s-version-332820 kubelet[3233]: E0717 23:06:15.407991    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:06:30 old-k8s-version-332820 kubelet[3233]: E0717 23:06:30.407900    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:06:43 old-k8s-version-332820 kubelet[3233]: E0717 23:06:43.408325    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:06:57 old-k8s-version-332820 kubelet[3233]: E0717 23:06:57.408577    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:07:08 old-k8s-version-332820 kubelet[3233]: E0717 23:07:08.407849    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:07:11 old-k8s-version-332820 kubelet[3233]: E0717 23:07:11.484381    3233 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Jul 17 23:07:20 old-k8s-version-332820 kubelet[3233]: E0717 23:07:20.407847    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:07:34 old-k8s-version-332820 kubelet[3233]: E0717 23:07:34.407734    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:07:48 old-k8s-version-332820 kubelet[3233]: E0717 23:07:48.407539    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:07:59 old-k8s-version-332820 kubelet[3233]: E0717 23:07:59.408320    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:08:13 old-k8s-version-332820 kubelet[3233]: E0717 23:08:13.407580    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:08:27 old-k8s-version-332820 kubelet[3233]: E0717 23:08:27.428328    3233 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 17 23:08:27 old-k8s-version-332820 kubelet[3233]: E0717 23:08:27.428400    3233 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 17 23:08:27 old-k8s-version-332820 kubelet[3233]: E0717 23:08:27.428465    3233 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 17 23:08:27 old-k8s-version-332820 kubelet[3233]: E0717 23:08:27.428506    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Jul 17 23:08:40 old-k8s-version-332820 kubelet[3233]: E0717 23:08:40.407755    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:08:54 old-k8s-version-332820 kubelet[3233]: E0717 23:08:54.408734    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:09:09 old-k8s-version-332820 kubelet[3233]: E0717 23:09:09.416130    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:09:22 old-k8s-version-332820 kubelet[3233]: E0717 23:09:22.407931    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:09:35 old-k8s-version-332820 kubelet[3233]: E0717 23:09:35.408155    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:09:49 old-k8s-version-332820 kubelet[3233]: E0717 23:09:49.408333    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 23:10:01 old-k8s-version-332820 kubelet[3233]: E0717 23:10:01.407958    3233 pod_workers.go:191] Error syncing pod 3ddd38f4-fe18-4e49-bff3-f8f73a688b98 ("metrics-server-74d5856cc6-59wx5_kube-system(3ddd38f4-fe18-4e49-bff3-f8f73a688b98)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [62b724cfd1a633a54cf3b5c3ecfd52b9c2e23a367496659e5e8e6692e8ac813e] <==
	* I0717 22:57:42.703939       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 22:57:42.715523       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 22:57:42.715618       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 22:57:42.727810       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 22:57:42.729276       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-332820_0a5cd2fd-2dd8-41df-91a8-6b8401e0fdf5!
	I0717 22:57:42.731052       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5c0cfa39-ec6e-4c49-aca3-a84ac182f2fb", APIVersion:"v1", ResourceVersion:"420", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-332820_0a5cd2fd-2dd8-41df-91a8-6b8401e0fdf5 became leader
	I0717 22:57:42.830692       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-332820_0a5cd2fd-2dd8-41df-91a8-6b8401e0fdf5!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-332820 -n old-k8s-version-332820
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-332820 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-59wx5
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-332820 describe pod metrics-server-74d5856cc6-59wx5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-332820 describe pod metrics-server-74d5856cc6-59wx5: exit status 1 (78.705723ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-59wx5" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-332820 describe pod metrics-server-74d5856cc6-59wx5: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (123.34s)
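Note: the only non-running pod in the post-mortem above is the metrics-server pod, which the kubelet log shows stuck in ImagePullBackOff because its image registry points at the unreachable fake.domain. A rough manual way to confirm that state, as a sketch assuming the cluster is still up and the addon's usual k8s-app=metrics-server label, would be:

	kubectl --context old-k8s-version-332820 -n kube-system \
	  get pod -l k8s-app=metrics-server \
	  -o jsonpath='{.items[*].status.containerStatuses[*].state.waiting.reason}'

which would be expected to print ImagePullBackOff (or ErrImagePull) rather than an empty result for a running container.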

x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (162.01s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-571296 -n embed-certs-571296
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-07-17 23:12:09.468083004 +0000 UTC m=+5493.257869000
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-571296 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-571296 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.11µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-571296 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
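The empty "Addon deployment info" above reflects the describe call timing out; the information the check is after is essentially the container image of the dashboard-metrics-scraper deployment. A lighter-weight query of the same thing, as a sketch assuming the profile is still reachable, would be:

	kubectl --context embed-certs-571296 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'

which should print registry.k8s.io/echoserver:1.4 once the dashboard addon has applied the MetricsScraper image override.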
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-571296 -n embed-certs-571296
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-571296 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-571296 logs -n 25: (1.241655157s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p NoKubernetes-431736 sudo                            | NoKubernetes-431736          | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-431736                                 | NoKubernetes-431736          | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:42 UTC |
	| start   | -p                                                     | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:44 UTC |
	|         | default-k8s-diff-port-504828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-332820        | old-k8s-version-332820       | jenkins | v1.31.0 | 17 Jul 23 22:42 UTC | 17 Jul 23 22:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-332820                              | old-k8s-version-332820       | jenkins | v1.31.0 | 17 Jul 23 22:43 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-571296            | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 22:44 UTC | 17 Jul 23 22:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-571296                                  | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 22:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-935524             | no-preload-935524            | jenkins | v1.31.0 | 17 Jul 23 22:44 UTC | 17 Jul 23 22:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-935524                                   | no-preload-935524            | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-504828  | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC | 17 Jul 23 22:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC |                     |
	|         | default-k8s-diff-port-504828                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-332820             | old-k8s-version-332820       | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-332820                              | old-k8s-version-332820       | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC | 17 Jul 23 22:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-571296                 | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 22:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-571296                                  | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 22:46 UTC | 17 Jul 23 23:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-935524                  | no-preload-935524            | jenkins | v1.31.0 | 17 Jul 23 22:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-504828       | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-935524                                   | no-preload-935524            | jenkins | v1.31.0 | 17 Jul 23 22:47 UTC | 17 Jul 23 22:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:47 UTC | 17 Jul 23 23:01 UTC |
	|         | default-k8s-diff-port-504828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-332820                              | old-k8s-version-332820       | jenkins | v1.31.0 | 17 Jul 23 23:10 UTC | 17 Jul 23 23:10 UTC |
	| start   | -p newest-cni-670356 --memory=2200 --alsologtostderr   | newest-cni-670356            | jenkins | v1.31.0 | 17 Jul 23 23:10 UTC | 17 Jul 23 23:11 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-670356             | newest-cni-670356            | jenkins | v1.31.0 | 17 Jul 23 23:11 UTC | 17 Jul 23 23:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-670356                                   | newest-cni-670356            | jenkins | v1.31.0 | 17 Jul 23 23:11 UTC | 17 Jul 23 23:11 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-670356                  | newest-cni-670356            | jenkins | v1.31.0 | 17 Jul 23 23:11 UTC | 17 Jul 23 23:11 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-670356 --memory=2200 --alsologtostderr   | newest-cni-670356            | jenkins | v1.31.0 | 17 Jul 23 23:11 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 23:11:22
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 23:11:22.494157   59773 out.go:296] Setting OutFile to fd 1 ...
	I0717 23:11:22.494259   59773 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 23:11:22.494269   59773 out.go:309] Setting ErrFile to fd 2...
	I0717 23:11:22.494274   59773 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 23:11:22.494461   59773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-15759/.minikube/bin
	I0717 23:11:22.495020   59773 out.go:303] Setting JSON to false
	I0717 23:11:22.495880   59773 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":10434,"bootTime":1689625048,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 23:11:22.495942   59773 start.go:138] virtualization: kvm guest
	I0717 23:11:22.498356   59773 out.go:177] * [newest-cni-670356] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 23:11:22.500218   59773 notify.go:220] Checking for updates...
	I0717 23:11:22.500233   59773 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 23:11:22.502006   59773 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 23:11:22.503537   59773 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 23:11:22.505071   59773 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-15759/.minikube
	I0717 23:11:22.506534   59773 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 23:11:22.507981   59773 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 23:11:22.509678   59773 config.go:182] Loaded profile config "newest-cni-670356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 23:11:22.510061   59773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 23:11:22.510105   59773 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 23:11:22.524771   59773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43701
	I0717 23:11:22.525241   59773 main.go:141] libmachine: () Calling .GetVersion
	I0717 23:11:22.525961   59773 main.go:141] libmachine: Using API Version  1
	I0717 23:11:22.525984   59773 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 23:11:22.526378   59773 main.go:141] libmachine: () Calling .GetMachineName
	I0717 23:11:22.526568   59773 main.go:141] libmachine: (newest-cni-670356) Calling .DriverName
	I0717 23:11:22.526817   59773 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 23:11:22.527303   59773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 23:11:22.527346   59773 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 23:11:22.544235   59773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39505
	I0717 23:11:22.544715   59773 main.go:141] libmachine: () Calling .GetVersion
	I0717 23:11:22.545361   59773 main.go:141] libmachine: Using API Version  1
	I0717 23:11:22.545394   59773 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 23:11:22.545811   59773 main.go:141] libmachine: () Calling .GetMachineName
	I0717 23:11:22.546029   59773 main.go:141] libmachine: (newest-cni-670356) Calling .DriverName
	I0717 23:11:22.582834   59773 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 23:11:22.584275   59773 start.go:298] selected driver: kvm2
	I0717 23:11:22.584288   59773 start.go:880] validating driver "kvm2" against &{Name:newest-cni-670356 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-670
356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.145 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledSt
op:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 23:11:22.584403   59773 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 23:11:22.585070   59773 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 23:11:22.585151   59773 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16899-15759/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 23:11:22.600238   59773 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.0
	I0717 23:11:22.600623   59773 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0717 23:11:22.600664   59773 cni.go:84] Creating CNI manager for ""
	I0717 23:11:22.600674   59773 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 23:11:22.600685   59773 start_flags.go:319] config:
	{Name:newest-cni-670356 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-670356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:
[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.145 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:fa
lse ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 23:11:22.600841   59773 iso.go:125] acquiring lock: {Name:mk0fc3164b0f4222cb2c3b274841202f1f6c0fc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 23:11:22.602758   59773 out.go:177] * Starting control plane node newest-cni-670356 in cluster newest-cni-670356
	I0717 23:11:22.604097   59773 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 23:11:22.604129   59773 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0717 23:11:22.604142   59773 cache.go:57] Caching tarball of preloaded images
	I0717 23:11:22.604237   59773 preload.go:174] Found /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 23:11:22.604257   59773 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 23:11:22.604398   59773 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/newest-cni-670356/config.json ...
	I0717 23:11:22.604607   59773 start.go:365] acquiring machines lock for newest-cni-670356: {Name:mk8a02d78ba6485e1a7d39c2e7feff944c6a9dbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 23:11:22.604680   59773 start.go:369] acquired machines lock for "newest-cni-670356" in 53.612µs
	I0717 23:11:22.604702   59773 start.go:96] Skipping create...Using existing machine configuration
	I0717 23:11:22.604711   59773 fix.go:54] fixHost starting: 
	I0717 23:11:22.604978   59773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 23:11:22.605013   59773 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 23:11:22.619579   59773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36939
	I0717 23:11:22.619994   59773 main.go:141] libmachine: () Calling .GetVersion
	I0717 23:11:22.620462   59773 main.go:141] libmachine: Using API Version  1
	I0717 23:11:22.620485   59773 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 23:11:22.620778   59773 main.go:141] libmachine: () Calling .GetMachineName
	I0717 23:11:22.620984   59773 main.go:141] libmachine: (newest-cni-670356) Calling .DriverName
	I0717 23:11:22.621132   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetState
	I0717 23:11:22.622748   59773 fix.go:102] recreateIfNeeded on newest-cni-670356: state=Stopped err=<nil>
	I0717 23:11:22.622782   59773 main.go:141] libmachine: (newest-cni-670356) Calling .DriverName
	W0717 23:11:22.622988   59773 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 23:11:22.625047   59773 out.go:177] * Restarting existing kvm2 VM for "newest-cni-670356" ...
	I0717 23:11:22.626604   59773 main.go:141] libmachine: (newest-cni-670356) Calling .Start
	I0717 23:11:22.626798   59773 main.go:141] libmachine: (newest-cni-670356) Ensuring networks are active...
	I0717 23:11:22.627567   59773 main.go:141] libmachine: (newest-cni-670356) Ensuring network default is active
	I0717 23:11:22.628010   59773 main.go:141] libmachine: (newest-cni-670356) Ensuring network mk-newest-cni-670356 is active
	I0717 23:11:22.628446   59773 main.go:141] libmachine: (newest-cni-670356) Getting domain xml...
	I0717 23:11:22.629337   59773 main.go:141] libmachine: (newest-cni-670356) Creating domain...
	I0717 23:11:22.996132   59773 main.go:141] libmachine: (newest-cni-670356) Waiting to get IP...
	I0717 23:11:22.997146   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:22.997707   59773 main.go:141] libmachine: (newest-cni-670356) DBG | unable to find current IP address of domain newest-cni-670356 in network mk-newest-cni-670356
	I0717 23:11:22.997761   59773 main.go:141] libmachine: (newest-cni-670356) DBG | I0717 23:11:22.997680   59808 retry.go:31] will retry after 305.655751ms: waiting for machine to come up
	I0717 23:11:23.305387   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:23.305929   59773 main.go:141] libmachine: (newest-cni-670356) DBG | unable to find current IP address of domain newest-cni-670356 in network mk-newest-cni-670356
	I0717 23:11:23.305956   59773 main.go:141] libmachine: (newest-cni-670356) DBG | I0717 23:11:23.305885   59808 retry.go:31] will retry after 333.922997ms: waiting for machine to come up
	I0717 23:11:23.641357   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:23.641880   59773 main.go:141] libmachine: (newest-cni-670356) DBG | unable to find current IP address of domain newest-cni-670356 in network mk-newest-cni-670356
	I0717 23:11:23.641908   59773 main.go:141] libmachine: (newest-cni-670356) DBG | I0717 23:11:23.641825   59808 retry.go:31] will retry after 432.075021ms: waiting for machine to come up
	I0717 23:11:24.075243   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:24.075702   59773 main.go:141] libmachine: (newest-cni-670356) DBG | unable to find current IP address of domain newest-cni-670356 in network mk-newest-cni-670356
	I0717 23:11:24.075723   59773 main.go:141] libmachine: (newest-cni-670356) DBG | I0717 23:11:24.075655   59808 retry.go:31] will retry after 439.534593ms: waiting for machine to come up
	I0717 23:11:24.517213   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:24.517823   59773 main.go:141] libmachine: (newest-cni-670356) DBG | unable to find current IP address of domain newest-cni-670356 in network mk-newest-cni-670356
	I0717 23:11:24.517852   59773 main.go:141] libmachine: (newest-cni-670356) DBG | I0717 23:11:24.517765   59808 retry.go:31] will retry after 627.908603ms: waiting for machine to come up
	I0717 23:11:25.147737   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:25.148261   59773 main.go:141] libmachine: (newest-cni-670356) DBG | unable to find current IP address of domain newest-cni-670356 in network mk-newest-cni-670356
	I0717 23:11:25.148283   59773 main.go:141] libmachine: (newest-cni-670356) DBG | I0717 23:11:25.148229   59808 retry.go:31] will retry after 928.498227ms: waiting for machine to come up
	I0717 23:11:26.078423   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:26.078956   59773 main.go:141] libmachine: (newest-cni-670356) DBG | unable to find current IP address of domain newest-cni-670356 in network mk-newest-cni-670356
	I0717 23:11:26.078989   59773 main.go:141] libmachine: (newest-cni-670356) DBG | I0717 23:11:26.078888   59808 retry.go:31] will retry after 767.818763ms: waiting for machine to come up
	I0717 23:11:26.847964   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:26.848462   59773 main.go:141] libmachine: (newest-cni-670356) DBG | unable to find current IP address of domain newest-cni-670356 in network mk-newest-cni-670356
	I0717 23:11:26.848486   59773 main.go:141] libmachine: (newest-cni-670356) DBG | I0717 23:11:26.848415   59808 retry.go:31] will retry after 1.402983956s: waiting for machine to come up
	I0717 23:11:28.253067   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:28.253642   59773 main.go:141] libmachine: (newest-cni-670356) DBG | unable to find current IP address of domain newest-cni-670356 in network mk-newest-cni-670356
	I0717 23:11:28.253665   59773 main.go:141] libmachine: (newest-cni-670356) DBG | I0717 23:11:28.253608   59808 retry.go:31] will retry after 1.55579605s: waiting for machine to come up
	I0717 23:11:29.810838   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:29.811264   59773 main.go:141] libmachine: (newest-cni-670356) DBG | unable to find current IP address of domain newest-cni-670356 in network mk-newest-cni-670356
	I0717 23:11:29.811294   59773 main.go:141] libmachine: (newest-cni-670356) DBG | I0717 23:11:29.811209   59808 retry.go:31] will retry after 1.947890148s: waiting for machine to come up
	I0717 23:11:31.761369   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:31.761913   59773 main.go:141] libmachine: (newest-cni-670356) DBG | unable to find current IP address of domain newest-cni-670356 in network mk-newest-cni-670356
	I0717 23:11:31.761943   59773 main.go:141] libmachine: (newest-cni-670356) DBG | I0717 23:11:31.761847   59808 retry.go:31] will retry after 2.444094477s: waiting for machine to come up
	I0717 23:11:34.208370   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:34.208915   59773 main.go:141] libmachine: (newest-cni-670356) DBG | unable to find current IP address of domain newest-cni-670356 in network mk-newest-cni-670356
	I0717 23:11:34.208946   59773 main.go:141] libmachine: (newest-cni-670356) DBG | I0717 23:11:34.208848   59808 retry.go:31] will retry after 2.502840422s: waiting for machine to come up
	I0717 23:11:36.713244   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:36.713817   59773 main.go:141] libmachine: (newest-cni-670356) DBG | unable to find current IP address of domain newest-cni-670356 in network mk-newest-cni-670356
	I0717 23:11:36.713850   59773 main.go:141] libmachine: (newest-cni-670356) DBG | I0717 23:11:36.713753   59808 retry.go:31] will retry after 3.628801003s: waiting for machine to come up
	I0717 23:11:40.344947   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:40.345280   59773 main.go:141] libmachine: (newest-cni-670356) Found IP for machine: 192.168.50.145
	I0717 23:11:40.345316   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has current primary IP address 192.168.50.145 and MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:40.345324   59773 main.go:141] libmachine: (newest-cni-670356) Reserving static IP address...
	I0717 23:11:40.345725   59773 main.go:141] libmachine: (newest-cni-670356) DBG | found host DHCP lease matching {name: "newest-cni-670356", mac: "52:54:00:a1:05:ad", ip: "192.168.50.145"} in network mk-newest-cni-670356: {Iface:virbr2 ExpiryTime:2023-07-18 00:11:34 +0000 UTC Type:0 Mac:52:54:00:a1:05:ad Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:newest-cni-670356 Clientid:01:52:54:00:a1:05:ad}
	I0717 23:11:40.345763   59773 main.go:141] libmachine: (newest-cni-670356) Reserved static IP address: 192.168.50.145
	I0717 23:11:40.345779   59773 main.go:141] libmachine: (newest-cni-670356) DBG | skip adding static IP to network mk-newest-cni-670356 - found existing host DHCP lease matching {name: "newest-cni-670356", mac: "52:54:00:a1:05:ad", ip: "192.168.50.145"}
	I0717 23:11:40.345800   59773 main.go:141] libmachine: (newest-cni-670356) DBG | Getting to WaitForSSH function...
	I0717 23:11:40.345817   59773 main.go:141] libmachine: (newest-cni-670356) Waiting for SSH to be available...
	I0717 23:11:40.347615   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:40.347907   59773 main.go:141] libmachine: (newest-cni-670356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:05:ad", ip: ""} in network mk-newest-cni-670356: {Iface:virbr2 ExpiryTime:2023-07-18 00:11:34 +0000 UTC Type:0 Mac:52:54:00:a1:05:ad Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:newest-cni-670356 Clientid:01:52:54:00:a1:05:ad}
	I0717 23:11:40.347940   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined IP address 192.168.50.145 and MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:40.348051   59773 main.go:141] libmachine: (newest-cni-670356) DBG | Using SSH client type: external
	I0717 23:11:40.348085   59773 main.go:141] libmachine: (newest-cni-670356) DBG | Using SSH private key: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/newest-cni-670356/id_rsa (-rw-------)
	I0717 23:11:40.348114   59773 main.go:141] libmachine: (newest-cni-670356) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.145 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16899-15759/.minikube/machines/newest-cni-670356/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 23:11:40.348128   59773 main.go:141] libmachine: (newest-cni-670356) DBG | About to run SSH command:
	I0717 23:11:40.348136   59773 main.go:141] libmachine: (newest-cni-670356) DBG | exit 0
	I0717 23:11:40.437561   59773 main.go:141] libmachine: (newest-cni-670356) DBG | SSH cmd err, output: <nil>: 
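
The retry lines above show libmachine polling libvirt's DHCP leases with a steadily growing, jittered delay until the domain reports an address, and then probing SSH the same way. A minimal Go sketch of that wait-with-backoff pattern follows; the lookupIP callback and the specific delays are illustrative assumptions, not minikube's actual retry.go API.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor keeps calling check until it succeeds or the deadline passes,
// sleeping a little longer (with jitter) after every failed attempt.
func waitFor(check func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := check()
		if err == nil {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // grow the base delay, roughly matching the log above
	}
}

func main() {
	// Stand-in for reading the libvirt DHCP lease table; always fails here.
	lookupIP := func() (string, error) { return "", errors.New("no lease yet") }
	if _, err := waitFor(lookupIP, 2*time.Second); err != nil {
		fmt.Println(err)
	}
}
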
	I0717 23:11:40.438006   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetConfigRaw
	I0717 23:11:40.438734   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetIP
	I0717 23:11:40.441283   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:40.441578   59773 main.go:141] libmachine: (newest-cni-670356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:05:ad", ip: ""} in network mk-newest-cni-670356: {Iface:virbr2 ExpiryTime:2023-07-18 00:11:34 +0000 UTC Type:0 Mac:52:54:00:a1:05:ad Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:newest-cni-670356 Clientid:01:52:54:00:a1:05:ad}
	I0717 23:11:40.441608   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined IP address 192.168.50.145 and MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:40.441926   59773 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/newest-cni-670356/config.json ...
	I0717 23:11:40.442215   59773 machine.go:88] provisioning docker machine ...
	I0717 23:11:40.442236   59773 main.go:141] libmachine: (newest-cni-670356) Calling .DriverName
	I0717 23:11:40.442464   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetMachineName
	I0717 23:11:40.442663   59773 buildroot.go:166] provisioning hostname "newest-cni-670356"
	I0717 23:11:40.442688   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetMachineName
	I0717 23:11:40.442853   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHHostname
	I0717 23:11:40.445453   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:40.445866   59773 main.go:141] libmachine: (newest-cni-670356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:05:ad", ip: ""} in network mk-newest-cni-670356: {Iface:virbr2 ExpiryTime:2023-07-18 00:11:34 +0000 UTC Type:0 Mac:52:54:00:a1:05:ad Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:newest-cni-670356 Clientid:01:52:54:00:a1:05:ad}
	I0717 23:11:40.445887   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined IP address 192.168.50.145 and MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:40.446113   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHPort
	I0717 23:11:40.446293   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHKeyPath
	I0717 23:11:40.446408   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHKeyPath
	I0717 23:11:40.446531   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHUsername
	I0717 23:11:40.446727   59773 main.go:141] libmachine: Using SSH client type: native
	I0717 23:11:40.447141   59773 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.145 22 <nil> <nil>}
	I0717 23:11:40.447155   59773 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-670356 && echo "newest-cni-670356" | sudo tee /etc/hostname
	I0717 23:11:40.583886   59773 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-670356
	
	I0717 23:11:40.583928   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHHostname
	I0717 23:11:40.586942   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:40.587218   59773 main.go:141] libmachine: (newest-cni-670356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:05:ad", ip: ""} in network mk-newest-cni-670356: {Iface:virbr2 ExpiryTime:2023-07-18 00:11:34 +0000 UTC Type:0 Mac:52:54:00:a1:05:ad Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:newest-cni-670356 Clientid:01:52:54:00:a1:05:ad}
	I0717 23:11:40.587254   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined IP address 192.168.50.145 and MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:40.587468   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHPort
	I0717 23:11:40.587655   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHKeyPath
	I0717 23:11:40.587809   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHKeyPath
	I0717 23:11:40.587938   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHUsername
	I0717 23:11:40.588168   59773 main.go:141] libmachine: Using SSH client type: native
	I0717 23:11:40.588712   59773 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.145 22 <nil> <nil>}
	I0717 23:11:40.588740   59773 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-670356' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-670356/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-670356' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 23:11:40.719932   59773 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 23:11:40.719996   59773 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16899-15759/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-15759/.minikube}
	I0717 23:11:40.720033   59773 buildroot.go:174] setting up certificates
	I0717 23:11:40.720048   59773 provision.go:83] configureAuth start
	I0717 23:11:40.720058   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetMachineName
	I0717 23:11:40.720411   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetIP
	I0717 23:11:40.723142   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:40.723566   59773 main.go:141] libmachine: (newest-cni-670356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:05:ad", ip: ""} in network mk-newest-cni-670356: {Iface:virbr2 ExpiryTime:2023-07-18 00:11:34 +0000 UTC Type:0 Mac:52:54:00:a1:05:ad Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:newest-cni-670356 Clientid:01:52:54:00:a1:05:ad}
	I0717 23:11:40.723600   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined IP address 192.168.50.145 and MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:40.723776   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHHostname
	I0717 23:11:40.726219   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:40.726636   59773 main.go:141] libmachine: (newest-cni-670356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:05:ad", ip: ""} in network mk-newest-cni-670356: {Iface:virbr2 ExpiryTime:2023-07-18 00:11:34 +0000 UTC Type:0 Mac:52:54:00:a1:05:ad Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:newest-cni-670356 Clientid:01:52:54:00:a1:05:ad}
	I0717 23:11:40.726684   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined IP address 192.168.50.145 and MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:40.726817   59773 provision.go:138] copyHostCerts
	I0717 23:11:40.726883   59773 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem, removing ...
	I0717 23:11:40.726896   59773 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem
	I0717 23:11:40.726986   59773 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/ca.pem (1078 bytes)
	I0717 23:11:40.727108   59773 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem, removing ...
	I0717 23:11:40.727126   59773 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem
	I0717 23:11:40.727195   59773 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/cert.pem (1123 bytes)
	I0717 23:11:40.727278   59773 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem, removing ...
	I0717 23:11:40.727287   59773 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem
	I0717 23:11:40.727319   59773 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-15759/.minikube/key.pem (1675 bytes)
	I0717 23:11:40.727381   59773 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem org=jenkins.newest-cni-670356 san=[192.168.50.145 192.168.50.145 localhost 127.0.0.1 minikube newest-cni-670356]
	I0717 23:11:41.058368   59773 provision.go:172] copyRemoteCerts
	I0717 23:11:41.058429   59773 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 23:11:41.058454   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHHostname
	I0717 23:11:41.061422   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:41.061810   59773 main.go:141] libmachine: (newest-cni-670356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:05:ad", ip: ""} in network mk-newest-cni-670356: {Iface:virbr2 ExpiryTime:2023-07-18 00:11:34 +0000 UTC Type:0 Mac:52:54:00:a1:05:ad Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:newest-cni-670356 Clientid:01:52:54:00:a1:05:ad}
	I0717 23:11:41.061838   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined IP address 192.168.50.145 and MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:41.062054   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHPort
	I0717 23:11:41.062305   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHKeyPath
	I0717 23:11:41.062500   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHUsername
	I0717 23:11:41.062671   59773 sshutil.go:53] new ssh client: &{IP:192.168.50.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/newest-cni-670356/id_rsa Username:docker}
	I0717 23:11:41.151285   59773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 23:11:41.177713   59773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0717 23:11:41.203651   59773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 23:11:41.229618   59773 provision.go:86] duration metric: configureAuth took 509.558948ms
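
The configureAuth step above regenerates a server certificate whose subject alternative names cover the node IP, localhost and the machine name, then copies it into /etc/docker on the guest. The sketch below shows how such a SAN list can be built with Go's crypto/x509; it signs with a throwaway CA instead of minikube's ca.pem and ca-key.pem, so treat it as an illustration of the fields involved, not the real cert path.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA (minikube would instead load ca.pem / ca-key.pem).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert whose SANs mirror the list generated in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-670356"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.50.145"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-670356"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued server cert, %d bytes DER\n", len(srvDER))
}
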
	I0717 23:11:41.229644   59773 buildroot.go:189] setting minikube options for container-runtime
	I0717 23:11:41.229805   59773 config.go:182] Loaded profile config "newest-cni-670356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 23:11:41.229871   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHHostname
	I0717 23:11:41.232819   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:41.233106   59773 main.go:141] libmachine: (newest-cni-670356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:05:ad", ip: ""} in network mk-newest-cni-670356: {Iface:virbr2 ExpiryTime:2023-07-18 00:11:34 +0000 UTC Type:0 Mac:52:54:00:a1:05:ad Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:newest-cni-670356 Clientid:01:52:54:00:a1:05:ad}
	I0717 23:11:41.233149   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined IP address 192.168.50.145 and MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:41.233297   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHPort
	I0717 23:11:41.233509   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHKeyPath
	I0717 23:11:41.233711   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHKeyPath
	I0717 23:11:41.233843   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHUsername
	I0717 23:11:41.233986   59773 main.go:141] libmachine: Using SSH client type: native
	I0717 23:11:41.234582   59773 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.145 22 <nil> <nil>}
	I0717 23:11:41.234608   59773 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 23:11:41.592771   59773 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 23:11:41.592804   59773 machine.go:91] provisioned docker machine in 1.150574703s
	I0717 23:11:41.592816   59773 start.go:300] post-start starting for "newest-cni-670356" (driver="kvm2")
	I0717 23:11:41.592828   59773 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 23:11:41.592851   59773 main.go:141] libmachine: (newest-cni-670356) Calling .DriverName
	I0717 23:11:41.593192   59773 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 23:11:41.593229   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHHostname
	I0717 23:11:41.595916   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:41.596399   59773 main.go:141] libmachine: (newest-cni-670356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:05:ad", ip: ""} in network mk-newest-cni-670356: {Iface:virbr2 ExpiryTime:2023-07-18 00:11:34 +0000 UTC Type:0 Mac:52:54:00:a1:05:ad Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:newest-cni-670356 Clientid:01:52:54:00:a1:05:ad}
	I0717 23:11:41.596433   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined IP address 192.168.50.145 and MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:41.596681   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHPort
	I0717 23:11:41.596899   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHKeyPath
	I0717 23:11:41.597040   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHUsername
	I0717 23:11:41.597158   59773 sshutil.go:53] new ssh client: &{IP:192.168.50.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/newest-cni-670356/id_rsa Username:docker}
	I0717 23:11:41.691822   59773 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 23:11:41.696515   59773 info.go:137] Remote host: Buildroot 2021.02.12
	I0717 23:11:41.696536   59773 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/addons for local assets ...
	I0717 23:11:41.696594   59773 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-15759/.minikube/files for local assets ...
	I0717 23:11:41.696661   59773 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem -> 229902.pem in /etc/ssl/certs
	I0717 23:11:41.696746   59773 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 23:11:41.705109   59773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /etc/ssl/certs/229902.pem (1708 bytes)
	I0717 23:11:41.731055   59773 start.go:303] post-start completed in 138.223424ms
	I0717 23:11:41.731082   59773 fix.go:56] fixHost completed within 19.12637065s
	I0717 23:11:41.731102   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHHostname
	I0717 23:11:41.733732   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:41.734100   59773 main.go:141] libmachine: (newest-cni-670356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:05:ad", ip: ""} in network mk-newest-cni-670356: {Iface:virbr2 ExpiryTime:2023-07-18 00:11:34 +0000 UTC Type:0 Mac:52:54:00:a1:05:ad Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:newest-cni-670356 Clientid:01:52:54:00:a1:05:ad}
	I0717 23:11:41.734138   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined IP address 192.168.50.145 and MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:41.734292   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHPort
	I0717 23:11:41.734502   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHKeyPath
	I0717 23:11:41.734704   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHKeyPath
	I0717 23:11:41.734854   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHUsername
	I0717 23:11:41.735033   59773 main.go:141] libmachine: Using SSH client type: native
	I0717 23:11:41.735633   59773 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 192.168.50.145 22 <nil> <nil>}
	I0717 23:11:41.735651   59773 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 23:11:41.854745   59773 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689635501.798135185
	
	I0717 23:11:41.854768   59773 fix.go:206] guest clock: 1689635501.798135185
	I0717 23:11:41.854777   59773 fix.go:219] Guest: 2023-07-17 23:11:41.798135185 +0000 UTC Remote: 2023-07-17 23:11:41.731085947 +0000 UTC m=+19.271857826 (delta=67.049238ms)
	I0717 23:11:41.854798   59773 fix.go:190] guest clock delta is within tolerance: 67.049238ms
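
The guest clock check parses the VM's "date +%s.%N" output and compares it against the host clock; the restart only bothers resyncing time when the delta exceeds a tolerance. A small Go sketch of that comparison is below; the one-second tolerance is an assumed value for illustration, not minikube's actual constant.

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// parseGuestClock turns the output of "date +%s.%N" into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	secs, err := strconv.ParseFloat(out, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec, frac := math.Modf(secs)
	return time.Unix(int64(sec), int64(frac*1e9)), nil
}

func main() {
	guest, err := parseGuestClock("1689635501.798135185") // value from the log above
	if err != nil {
		panic(err)
	}
	host := guest.Add(67 * time.Millisecond) // pretend host clock, roughly the delta in the log
	const tolerance = 1 * time.Second        // assumed threshold for illustration
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v too large, would resync\n", delta)
	}
}
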
	I0717 23:11:41.854802   59773 start.go:83] releasing machines lock for "newest-cni-670356", held for 19.250111309s
	I0717 23:11:41.854818   59773 main.go:141] libmachine: (newest-cni-670356) Calling .DriverName
	I0717 23:11:41.855082   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetIP
	I0717 23:11:41.858000   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:41.858415   59773 main.go:141] libmachine: (newest-cni-670356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:05:ad", ip: ""} in network mk-newest-cni-670356: {Iface:virbr2 ExpiryTime:2023-07-18 00:11:34 +0000 UTC Type:0 Mac:52:54:00:a1:05:ad Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:newest-cni-670356 Clientid:01:52:54:00:a1:05:ad}
	I0717 23:11:41.858449   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined IP address 192.168.50.145 and MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:41.858585   59773 main.go:141] libmachine: (newest-cni-670356) Calling .DriverName
	I0717 23:11:41.859069   59773 main.go:141] libmachine: (newest-cni-670356) Calling .DriverName
	I0717 23:11:41.859241   59773 main.go:141] libmachine: (newest-cni-670356) Calling .DriverName
	I0717 23:11:41.859313   59773 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 23:11:41.859364   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHHostname
	I0717 23:11:41.859481   59773 ssh_runner.go:195] Run: cat /version.json
	I0717 23:11:41.859507   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHHostname
	I0717 23:11:41.862250   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:41.862543   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:41.862713   59773 main.go:141] libmachine: (newest-cni-670356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:05:ad", ip: ""} in network mk-newest-cni-670356: {Iface:virbr2 ExpiryTime:2023-07-18 00:11:34 +0000 UTC Type:0 Mac:52:54:00:a1:05:ad Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:newest-cni-670356 Clientid:01:52:54:00:a1:05:ad}
	I0717 23:11:41.862744   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined IP address 192.168.50.145 and MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:41.862903   59773 main.go:141] libmachine: (newest-cni-670356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:05:ad", ip: ""} in network mk-newest-cni-670356: {Iface:virbr2 ExpiryTime:2023-07-18 00:11:34 +0000 UTC Type:0 Mac:52:54:00:a1:05:ad Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:newest-cni-670356 Clientid:01:52:54:00:a1:05:ad}
	I0717 23:11:41.862926   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined IP address 192.168.50.145 and MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:41.862930   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHPort
	I0717 23:11:41.863091   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHPort
	I0717 23:11:41.863165   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHKeyPath
	I0717 23:11:41.863242   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHKeyPath
	I0717 23:11:41.863298   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHUsername
	I0717 23:11:41.863367   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHUsername
	I0717 23:11:41.863423   59773 sshutil.go:53] new ssh client: &{IP:192.168.50.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/newest-cni-670356/id_rsa Username:docker}
	I0717 23:11:41.863489   59773 sshutil.go:53] new ssh client: &{IP:192.168.50.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/newest-cni-670356/id_rsa Username:docker}
	I0717 23:11:41.970006   59773 ssh_runner.go:195] Run: systemctl --version
	I0717 23:11:41.976344   59773 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 23:11:42.123470   59773 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 23:11:42.132740   59773 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 23:11:42.132804   59773 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 23:11:42.151172   59773 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 23:11:42.151198   59773 start.go:466] detecting cgroup driver to use...
	I0717 23:11:42.151276   59773 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 23:11:42.166483   59773 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 23:11:42.179597   59773 docker.go:196] disabling cri-docker service (if available) ...
	I0717 23:11:42.179661   59773 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 23:11:42.193088   59773 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 23:11:42.206088   59773 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 23:11:42.307917   59773 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 23:11:42.438367   59773 docker.go:212] disabling docker service ...
	I0717 23:11:42.438433   59773 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 23:11:42.453007   59773 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 23:11:42.465758   59773 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 23:11:42.601556   59773 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 23:11:42.732306   59773 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 23:11:42.746088   59773 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 23:11:42.764517   59773 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 23:11:42.764590   59773 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 23:11:42.775280   59773 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 23:11:42.775336   59773 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 23:11:42.786796   59773 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 23:11:42.798876   59773 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 23:11:42.810144   59773 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 23:11:42.820948   59773 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 23:11:42.830422   59773 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 23:11:42.830487   59773 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 23:11:42.845481   59773 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 23:11:42.855946   59773 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 23:11:42.973398   59773 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 23:11:43.153937   59773 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 23:11:43.154006   59773 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 23:11:43.161978   59773 start.go:534] Will wait 60s for crictl version
	I0717 23:11:43.162053   59773 ssh_runner.go:195] Run: which crictl
	I0717 23:11:43.167447   59773 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 23:11:43.204553   59773 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
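
After restarting CRI-O, the start code waits up to 60s for /var/run/crio/crio.sock to exist and for crictl to answer a version query before moving on. Below is a compact Go sketch of that bounded wait; it polls the local filesystem with os.Stat as a stand-in for the stat-over-SSH call shown in the log.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until the path exists or the timeout expires.
func waitForPath(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("%s did not appear within %v", path, timeout)
		}
		time.Sleep(interval)
	}
}

func main() {
	// In the log this path lives inside the VM and is checked over SSH;
	// here the local filesystem is polled purely as an illustration.
	if err := waitForPath("/var/run/crio/crio.sock", 3*time.Second, 200*time.Millisecond); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("socket is ready")
	}
}
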
	I0717 23:11:43.204641   59773 ssh_runner.go:195] Run: crio --version
	I0717 23:11:43.264422   59773 ssh_runner.go:195] Run: crio --version
	I0717 23:11:43.313410   59773 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.1 ...
	I0717 23:11:43.314897   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetIP
	I0717 23:11:43.317864   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:43.318194   59773 main.go:141] libmachine: (newest-cni-670356) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:05:ad", ip: ""} in network mk-newest-cni-670356: {Iface:virbr2 ExpiryTime:2023-07-18 00:11:34 +0000 UTC Type:0 Mac:52:54:00:a1:05:ad Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:newest-cni-670356 Clientid:01:52:54:00:a1:05:ad}
	I0717 23:11:43.318223   59773 main.go:141] libmachine: (newest-cni-670356) DBG | domain newest-cni-670356 has defined IP address 192.168.50.145 and MAC address 52:54:00:a1:05:ad in network mk-newest-cni-670356
	I0717 23:11:43.318424   59773 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0717 23:11:43.323873   59773 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 23:11:43.340448   59773 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0717 23:11:43.342012   59773 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 23:11:43.342100   59773 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 23:11:43.379633   59773 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 23:11:43.379688   59773 ssh_runner.go:195] Run: which lz4
	I0717 23:11:43.384029   59773 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 23:11:43.388377   59773 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 23:11:43.388410   59773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0717 23:11:45.216822   59773 crio.go:444] Took 1.832827 seconds to copy over tarball
	I0717 23:11:45.216922   59773 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 23:11:48.136750   59773 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.919804256s)
	I0717 23:11:48.136781   59773 crio.go:451] Took 2.919934 seconds to extract the tarball
	I0717 23:11:48.136792   59773 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 23:11:48.182148   59773 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 23:11:48.232383   59773 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 23:11:48.232410   59773 cache_images.go:84] Images are preloaded, skipping loading
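
The preload path runs sudo crictl images --output json, looks for a sentinel image (kube-apiserver at the target version), and only copies and unpacks the ~437 MB lz4 tarball when that image is missing; the second crictl pass above confirms everything is now present. The Go sketch below shows just the decision step; the JSON shape with a repoTags field mirrors typical crictl output but should be treated as an assumption, and the copy/extract work is only indicated by a comment.

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the crictl JSON output lists the given tag.
func hasImage(crictlJSON, tag string) bool {
	var list imageList
	if err := json.Unmarshal([]byte(crictlJSON), &list); err != nil {
		return false
	}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if strings.Contains(t, tag) {
				return true
			}
		}
	}
	return false
}

func main() {
	// Pretend output from "sudo crictl images --output json" before the preload.
	out := `{"images":[{"repoTags":["registry.k8s.io/pause:3.9"]}]}`
	if !hasImage(out, "registry.k8s.io/kube-apiserver:v1.27.3") {
		fmt.Println("assuming images are not preloaded")
		// At this point minikube scps preloaded-images-...tar.lz4 to /preloaded.tar.lz4
		// and runs: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	}
}
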
	I0717 23:11:48.232480   59773 ssh_runner.go:195] Run: crio config
	I0717 23:11:48.298251   59773 cni.go:84] Creating CNI manager for ""
	I0717 23:11:48.298280   59773 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 23:11:48.298294   59773 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0717 23:11:48.298317   59773 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.145 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-670356 NodeName:newest-cni-670356 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.145"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.145 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 23:11:48.298469   59773 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.145
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-670356"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.145
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.145"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 23:11:48.298530   59773 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-670356 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.145
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:newest-cni-670356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 23:11:48.298579   59773 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 23:11:48.308915   59773 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 23:11:48.308980   59773 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 23:11:48.319343   59773 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (414 bytes)
	I0717 23:11:48.337368   59773 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 23:11:48.354350   59773 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I0717 23:11:48.373298   59773 ssh_runner.go:195] Run: grep 192.168.50.145	control-plane.minikube.internal$ /etc/hosts
	I0717 23:11:48.377549   59773 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.145	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 23:11:48.390496   59773 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/newest-cni-670356 for IP: 192.168.50.145
	I0717 23:11:48.390525   59773 certs.go:190] acquiring lock for shared ca certs: {Name:mk358cdd8ffcf2f8ada4337c2bb687d932ef5afe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 23:11:48.390706   59773 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key
	I0717 23:11:48.390792   59773 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key
	I0717 23:11:48.390874   59773 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/newest-cni-670356/client.key
	I0717 23:11:48.390942   59773 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/newest-cni-670356/apiserver.key.ec38a47d
	I0717 23:11:48.390996   59773 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/newest-cni-670356/proxy-client.key
	I0717 23:11:48.391117   59773 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem (1338 bytes)
	W0717 23:11:48.391149   59773 certs.go:433] ignoring /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990_empty.pem, impossibly tiny 0 bytes
	I0717 23:11:48.391157   59773 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 23:11:48.391182   59773 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/ca.pem (1078 bytes)
	I0717 23:11:48.391229   59773 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/cert.pem (1123 bytes)
	I0717 23:11:48.391251   59773 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/certs/home/jenkins/minikube-integration/16899-15759/.minikube/certs/key.pem (1675 bytes)
	I0717 23:11:48.391291   59773 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem (1708 bytes)
	I0717 23:11:48.391828   59773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/newest-cni-670356/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 23:11:48.418407   59773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/newest-cni-670356/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 23:11:48.442205   59773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/newest-cni-670356/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 23:11:48.467396   59773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/newest-cni-670356/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 23:11:48.494010   59773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 23:11:48.518879   59773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 23:11:48.542688   59773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 23:11:48.568035   59773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 23:11:48.594300   59773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 23:11:48.619454   59773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/certs/22990.pem --> /usr/share/ca-certificates/22990.pem (1338 bytes)
	I0717 23:11:48.645405   59773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/ssl/certs/229902.pem --> /usr/share/ca-certificates/229902.pem (1708 bytes)
	I0717 23:11:48.670913   59773 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 23:11:48.690013   59773 ssh_runner.go:195] Run: openssl version
	I0717 23:11:48.695972   59773 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 23:11:48.712534   59773 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 23:11:48.718171   59773 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:41 /usr/share/ca-certificates/minikubeCA.pem
	I0717 23:11:48.718237   59773 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 23:11:48.726744   59773 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 23:11:48.738686   59773 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22990.pem && ln -fs /usr/share/ca-certificates/22990.pem /etc/ssl/certs/22990.pem"
	I0717 23:11:48.751576   59773 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22990.pem
	I0717 23:11:48.758001   59773 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 21:49 /usr/share/ca-certificates/22990.pem
	I0717 23:11:48.758050   59773 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22990.pem
	I0717 23:11:48.764407   59773 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22990.pem /etc/ssl/certs/51391683.0"
	I0717 23:11:48.777000   59773 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/229902.pem && ln -fs /usr/share/ca-certificates/229902.pem /etc/ssl/certs/229902.pem"
	I0717 23:11:48.788497   59773 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/229902.pem
	I0717 23:11:48.793270   59773 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 21:49 /usr/share/ca-certificates/229902.pem
	I0717 23:11:48.793343   59773 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/229902.pem
	I0717 23:11:48.799404   59773 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/229902.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 23:11:48.811337   59773 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 23:11:48.815950   59773 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 23:11:48.821884   59773 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 23:11:48.828372   59773 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 23:11:48.835517   59773 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 23:11:48.841755   59773 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 23:11:48.847767   59773 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
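The two groups of openssl commands above show how the CAs are made trusted and the cluster certificates validated: each CA is copied under /usr/share/ca-certificates, its OpenSSL subject hash is computed, and a <hash>.0 symlink is created in /etc/ssl/certs; afterwards every control-plane certificate is checked with "-checkend 86400", i.e. "does this expire within 24 hours". A minimal Go sketch of the same two checks (hypothetical helper names, not minikube's actual certs.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash runs "openssl x509 -hash -noout -in <cert>" and returns the
// hash used to name trust-store symlinks such as /etc/ssl/certs/b5213941.0.
func subjectHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

// expiresWithin24h mirrors "openssl x509 -checkend 86400": the command exits
// non-zero when the certificate expires within the next 86400 seconds.
func expiresWithin24h(certPath string) bool {
	return exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run() != nil
}

func main() {
	if hash, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err == nil {
		fmt.Printf("would run: ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
	}
	if expiresWithin24h("/var/lib/minikube/certs/apiserver-etcd-client.crt") {
		fmt.Println("apiserver-etcd-client.crt expires within 24h")
	}
}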
	I0717 23:11:48.853708   59773 kubeadm.go:404] StartCluster: {Name:newest-cni-670356 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-670356 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.145 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPo
rts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 23:11:48.853822   59773 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 23:11:48.853886   59773 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 23:11:48.891988   59773 cri.go:89] found id: ""
	I0717 23:11:48.892063   59773 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 23:11:48.906458   59773 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 23:11:48.906478   59773 kubeadm.go:636] restartCluster start
	I0717 23:11:48.906531   59773 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 23:11:48.920896   59773 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 23:11:48.922083   59773 kubeconfig.go:135] verify returned: extract IP: "newest-cni-670356" does not appear in /home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 23:11:48.922760   59773 kubeconfig.go:146] "newest-cni-670356" context is missing from /home/jenkins/minikube-integration/16899-15759/kubeconfig - will repair!
	I0717 23:11:48.923934   59773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/kubeconfig: {Name:mk1295c692902c2f497638c9fc1fd126fdb1a2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 23:11:48.962046   59773 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 23:11:48.973480   59773 api_server.go:166] Checking apiserver status ...
	I0717 23:11:48.973672   59773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 23:11:48.985585   59773 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 23:11:49.486004   59773 api_server.go:166] Checking apiserver status ...
	I0717 23:11:49.486100   59773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 23:11:49.499057   59773 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 23:11:49.986728   59773 api_server.go:166] Checking apiserver status ...
	I0717 23:11:49.986842   59773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 23:11:50.001200   59773 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 23:11:50.486749   59773 api_server.go:166] Checking apiserver status ...
	I0717 23:11:50.486830   59773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 23:11:50.501405   59773 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 23:11:50.985893   59773 api_server.go:166] Checking apiserver status ...
	I0717 23:11:50.986006   59773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 23:11:50.998897   59773 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 23:11:51.486602   59773 api_server.go:166] Checking apiserver status ...
	I0717 23:11:51.486730   59773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 23:11:51.499800   59773 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 23:11:51.986475   59773 api_server.go:166] Checking apiserver status ...
	I0717 23:11:51.986545   59773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 23:11:52.000412   59773 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 23:11:52.486275   59773 api_server.go:166] Checking apiserver status ...
	I0717 23:11:52.486340   59773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 23:11:52.500750   59773 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 23:11:52.986498   59773 api_server.go:166] Checking apiserver status ...
	I0717 23:11:52.986575   59773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 23:11:52.999695   59773 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 23:11:53.486330   59773 api_server.go:166] Checking apiserver status ...
	I0717 23:11:53.486421   59773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 23:11:53.499169   59773 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 23:11:53.986728   59773 api_server.go:166] Checking apiserver status ...
	I0717 23:11:53.986799   59773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 23:11:54.000124   59773 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 23:11:54.485663   59773 api_server.go:166] Checking apiserver status ...
	I0717 23:11:54.485753   59773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 23:11:54.499644   59773 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 23:11:54.985765   59773 api_server.go:166] Checking apiserver status ...
	I0717 23:11:54.985858   59773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 23:11:54.998960   59773 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 23:11:55.486638   59773 api_server.go:166] Checking apiserver status ...
	I0717 23:11:55.486705   59773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 23:11:55.500685   59773 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 23:11:55.986411   59773 api_server.go:166] Checking apiserver status ...
	I0717 23:11:55.986501   59773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 23:11:55.999684   59773 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 23:11:56.486146   59773 api_server.go:166] Checking apiserver status ...
	I0717 23:11:56.486214   59773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 23:11:56.499668   59773 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 23:11:56.986295   59773 api_server.go:166] Checking apiserver status ...
	I0717 23:11:56.986385   59773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 23:11:57.000257   59773 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 23:11:57.485787   59773 api_server.go:166] Checking apiserver status ...
	I0717 23:11:57.485870   59773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 23:11:57.503406   59773 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 23:11:57.986383   59773 api_server.go:166] Checking apiserver status ...
	I0717 23:11:57.986473   59773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 23:11:58.000261   59773 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 23:11:58.486700   59773 api_server.go:166] Checking apiserver status ...
	I0717 23:11:58.486766   59773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 23:11:58.500443   59773 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 23:11:58.974174   59773 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
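The long run of identical "Checking apiserver status" / "stopped: unable to get apiserver pid" pairs above is a fixed-interval poll: the pid lookup is retried roughly every 500ms until a surrounding context deadline fires, which is what produces the "context deadline exceeded" verdict and pushes the flow into reconfiguration. The retry shape looks roughly like this (illustrative sketch, not the real api_server.go):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerPID retries the pgrep lookup every 500ms until the
// kube-apiserver process appears or the context deadline expires.
func waitForAPIServerPID(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		if out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output(); err == nil {
			fmt.Printf("apiserver pid: %s", out)
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver error: %w", ctx.Err()) // -> "context deadline exceeded"
		case <-ticker.C:
			// try again on the next tick
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if err := waitForAPIServerPID(ctx); err != nil {
		fmt.Println("needs reconfigure:", err)
	}
}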
	I0717 23:11:58.974223   59773 kubeadm.go:1128] stopping kube-system containers ...
	I0717 23:11:58.974236   59773 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 23:11:58.974297   59773 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 23:11:59.010130   59773 cri.go:89] found id: ""
	I0717 23:11:59.010206   59773 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 23:11:59.027884   59773 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 23:11:59.039864   59773 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 23:11:59.039946   59773 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 23:11:59.052370   59773 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 23:11:59.052399   59773 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 23:11:59.177373   59773 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 23:12:00.004035   59773 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 23:12:00.192327   59773 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 23:12:00.299082   59773 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
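With no admin.conf/kubelet.conf/controller-manager.conf/scheduler.conf present, the restart path cannot reuse the old control plane, so it rebuilds it piecewise by running individual kubeadm init phases in the order shown above rather than a full kubeadm init. A rough equivalent of that sequence (hypothetical wrapper; the real invocation additionally prefixes PATH with the cached v1.27.3 binaries):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same phase order the log shows for the reconfigure path.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		args := append(phase, "--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			fmt.Printf("kubeadm %v failed: %v\n%s", phase, err, out)
			return
		}
	}
	fmt.Println("control plane regenerated from kubeadm.yaml")
}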
	I0717 23:12:00.379995   59773 api_server.go:52] waiting for apiserver process to appear ...
	I0717 23:12:00.380079   59773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 23:12:00.895646   59773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 23:12:01.395835   59773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 23:12:01.895657   59773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 23:12:02.395636   59773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 23:12:02.895125   59773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 23:12:02.923771   59773 api_server.go:72] duration metric: took 2.543777563s to wait for apiserver process to appear ...
	I0717 23:12:02.923801   59773 api_server.go:88] waiting for apiserver healthz status ...
	I0717 23:12:02.923820   59773 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8443/healthz ...
	I0717 23:12:07.269422   59773 api_server.go:279] https://192.168.50.145:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 23:12:07.269457   59773 api_server.go:103] status: https://192.168.50.145:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 23:12:07.770003   59773 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8443/healthz ...
	I0717 23:12:07.776497   59773 api_server.go:279] https://192.168.50.145:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 23:12:07.776529   59773 api_server.go:103] status: https://192.168.50.145:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 23:12:08.269828   59773 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8443/healthz ...
	I0717 23:12:08.280956   59773 api_server.go:279] https://192.168.50.145:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 23:12:08.280985   59773 api_server.go:103] status: https://192.168.50.145:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 23:12:08.769550   59773 api_server.go:253] Checking apiserver healthz at https://192.168.50.145:8443/healthz ...
	I0717 23:12:08.775708   59773 api_server.go:279] https://192.168.50.145:8443/healthz returned 200:
	ok
	I0717 23:12:08.788105   59773 api_server.go:141] control plane version: v1.27.3
	I0717 23:12:08.788137   59773 api_server.go:131] duration metric: took 5.864326223s to wait for apiserver health ...
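The healthz wait above treats anything other than a 200 "ok" as retryable: the initial 403 is the anonymous probe being rejected before the RBAC bootstrap roles exist, and the 500 bodies enumerate which post-start hooks (the [-] entries) have not finished yet. Once every hook reports [+], the endpoint returns 200 and the wait ends. A minimal sketch of that check (illustrative; the real client verifies the apiserver certificate rather than skipping TLS verification):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// healthzOK reports whether /healthz returned 200. A 403 (pre-RBAC) or a 500
// (post-start hooks still running) is treated as "not ready yet".
func healthzOK(url string) bool {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for brevity only; do not skip verification in real code.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
	return resp.StatusCode == http.StatusOK
}

func main() {
	for i := 0; i < 20 && !healthzOK("https://192.168.50.145:8443/healthz"); i++ {
		time.Sleep(500 * time.Millisecond)
	}
}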
	I0717 23:12:08.788147   59773 cni.go:84] Creating CNI manager for ""
	I0717 23:12:08.788155   59773 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 23:12:08.790430   59773 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 23:12:08.792095   59773 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 23:12:08.811711   59773 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
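The 457-byte file written to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced two lines earlier; its exact contents are not reproduced in the log. For orientation, a bridge conflist generally pairs a bridge plugin with host-local IPAM, roughly like the output of this sketch (field values are assumptions, not the file minikube actually wrote):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Illustrative bridge CNI config; values are assumptions.
	conf := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":      "bridge",
				"bridge":    "bridge",
				"isGateway": true,
				"ipMasq":    true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.42.0.0/16", // matches the profile's pod-network-cidr above
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	out, _ := json.MarshalIndent(conf, "", "  ")
	fmt.Println(string(out))
}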
	I0717 23:12:08.833118   59773 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 23:12:08.843661   59773 system_pods.go:59] 8 kube-system pods found
	I0717 23:12:08.843692   59773 system_pods.go:61] "coredns-5d78c9869d-twcq4" [a95aa4a9-9255-4e37-875b-4619be55987c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 23:12:08.843701   59773 system_pods.go:61] "etcd-newest-cni-670356" [96fc8a63-f191-4a11-a0c6-55e27bd5767d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 23:12:08.843709   59773 system_pods.go:61] "kube-apiserver-newest-cni-670356" [6362a945-ab78-4f58-9750-f060d525392b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 23:12:08.843718   59773 system_pods.go:61] "kube-controller-manager-newest-cni-670356" [5191c99d-62f6-4a9d-ab22-b165c7e93302] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 23:12:08.843728   59773 system_pods.go:61] "kube-proxy-r62qd" [e1d99339-b233-4835-92fd-2d07ea7462be] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 23:12:08.843738   59773 system_pods.go:61] "kube-scheduler-newest-cni-670356" [c50a4262-7890-4d2c-876f-e58cc9268431] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 23:12:08.843757   59773 system_pods.go:61] "metrics-server-74d5c6b9c-772qq" [dc3b5b5d-44f7-4233-bc27-7ffd8e408f8f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 23:12:08.843765   59773 system_pods.go:61] "storage-provisioner" [0925433d-19d9-40ec-9a85-e1939e0eb5e0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 23:12:08.843774   59773 system_pods.go:74] duration metric: took 10.635109ms to wait for pod list to return data ...
	I0717 23:12:08.843784   59773 node_conditions.go:102] verifying NodePressure condition ...
	I0717 23:12:08.848381   59773 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0717 23:12:08.848417   59773 node_conditions.go:123] node cpu capacity is 2
	I0717 23:12:08.848430   59773 node_conditions.go:105] duration metric: took 4.641067ms to run NodePressure ...
	I0717 23:12:08.848452   59773 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 23:12:09.104340   59773 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 23:12:09.119735   59773 ops.go:34] apiserver oom_adj: -16
	I0717 23:12:09.119754   59773 kubeadm.go:640] restartCluster took 20.213270036s
	I0717 23:12:09.119761   59773 kubeadm.go:406] StartCluster complete in 20.266071979s
	I0717 23:12:09.119775   59773 settings.go:142] acquiring lock: {Name:mk9be0d05c943f5010cb1ca9690e7c6a87f18950 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 23:12:09.119854   59773 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 23:12:09.121625   59773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/kubeconfig: {Name:mk1295c692902c2f497638c9fc1fd126fdb1a2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 23:12:09.121930   59773 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 23:12:09.122026   59773 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 23:12:09.122132   59773 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-670356"
	I0717 23:12:09.122152   59773 addons.go:231] Setting addon storage-provisioner=true in "newest-cni-670356"
	W0717 23:12:09.122160   59773 addons.go:240] addon storage-provisioner should already be in state true
	I0717 23:12:09.122198   59773 addons.go:69] Setting metrics-server=true in profile "newest-cni-670356"
	I0717 23:12:09.122229   59773 addons.go:231] Setting addon metrics-server=true in "newest-cni-670356"
	I0717 23:12:09.122229   59773 config.go:182] Loaded profile config "newest-cni-670356": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	W0717 23:12:09.122240   59773 addons.go:240] addon metrics-server should already be in state true
	I0717 23:12:09.122229   59773 host.go:66] Checking if "newest-cni-670356" exists ...
	I0717 23:12:09.122226   59773 addons.go:69] Setting dashboard=true in profile "newest-cni-670356"
	I0717 23:12:09.122293   59773 host.go:66] Checking if "newest-cni-670356" exists ...
	I0717 23:12:09.122305   59773 addons.go:231] Setting addon dashboard=true in "newest-cni-670356"
	W0717 23:12:09.122320   59773 addons.go:240] addon dashboard should already be in state true
	I0717 23:12:09.122356   59773 host.go:66] Checking if "newest-cni-670356" exists ...
	I0717 23:12:09.122639   59773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 23:12:09.122687   59773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 23:12:09.122712   59773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 23:12:09.122691   59773 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 23:12:09.122735   59773 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 23:12:09.122743   59773 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 23:12:09.122817   59773 addons.go:69] Setting default-storageclass=true in profile "newest-cni-670356"
	I0717 23:12:09.122837   59773 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-670356"
	I0717 23:12:09.123369   59773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 23:12:09.123410   59773 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 23:12:09.129894   59773 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-670356" context rescaled to 1 replicas
	I0717 23:12:09.129929   59773 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.145 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 23:12:09.131881   59773 out.go:177] * Verifying Kubernetes components...
	I0717 23:12:09.133343   59773 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 23:12:09.141091   59773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40585
	I0717 23:12:09.141551   59773 main.go:141] libmachine: () Calling .GetVersion
	I0717 23:12:09.141693   59773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37557
	I0717 23:12:09.141709   59773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39273
	I0717 23:12:09.141799   59773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32837
	I0717 23:12:09.142133   59773 main.go:141] libmachine: Using API Version  1
	I0717 23:12:09.142161   59773 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 23:12:09.142174   59773 main.go:141] libmachine: () Calling .GetVersion
	I0717 23:12:09.142225   59773 main.go:141] libmachine: () Calling .GetVersion
	I0717 23:12:09.142275   59773 main.go:141] libmachine: () Calling .GetVersion
	I0717 23:12:09.142578   59773 main.go:141] libmachine: () Calling .GetMachineName
	I0717 23:12:09.142735   59773 main.go:141] libmachine: Using API Version  1
	I0717 23:12:09.142738   59773 main.go:141] libmachine: Using API Version  1
	I0717 23:12:09.142743   59773 main.go:141] libmachine: Using API Version  1
	I0717 23:12:09.142750   59773 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 23:12:09.142756   59773 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 23:12:09.142775   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetState
	I0717 23:12:09.142761   59773 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 23:12:09.143111   59773 main.go:141] libmachine: () Calling .GetMachineName
	I0717 23:12:09.143114   59773 main.go:141] libmachine: () Calling .GetMachineName
	I0717 23:12:09.143115   59773 main.go:141] libmachine: () Calling .GetMachineName
	I0717 23:12:09.143593   59773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 23:12:09.143629   59773 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 23:12:09.144086   59773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 23:12:09.144168   59773 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 23:12:09.144212   59773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 23:12:09.144264   59773 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 23:12:09.163995   59773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42623
	I0717 23:12:09.164404   59773 main.go:141] libmachine: () Calling .GetVersion
	I0717 23:12:09.164753   59773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36643
	I0717 23:12:09.165033   59773 main.go:141] libmachine: Using API Version  1
	I0717 23:12:09.165061   59773 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 23:12:09.165093   59773 main.go:141] libmachine: () Calling .GetVersion
	I0717 23:12:09.165425   59773 main.go:141] libmachine: () Calling .GetMachineName
	I0717 23:12:09.165601   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetState
	I0717 23:12:09.165735   59773 main.go:141] libmachine: Using API Version  1
	I0717 23:12:09.165747   59773 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 23:12:09.166090   59773 main.go:141] libmachine: () Calling .GetMachineName
	I0717 23:12:09.166346   59773 addons.go:231] Setting addon default-storageclass=true in "newest-cni-670356"
	W0717 23:12:09.166360   59773 addons.go:240] addon default-storageclass should already be in state true
	I0717 23:12:09.166387   59773 host.go:66] Checking if "newest-cni-670356" exists ...
	I0717 23:12:09.166754   59773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 23:12:09.166782   59773 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 23:12:09.167011   59773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41307
	I0717 23:12:09.167037   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetState
	I0717 23:12:09.167582   59773 main.go:141] libmachine: () Calling .GetVersion
	I0717 23:12:09.167915   59773 main.go:141] libmachine: (newest-cni-670356) Calling .DriverName
	I0717 23:12:09.168080   59773 main.go:141] libmachine: Using API Version  1
	I0717 23:12:09.168092   59773 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 23:12:09.172294   59773 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0717 23:12:09.168654   59773 main.go:141] libmachine: () Calling .GetMachineName
	I0717 23:12:09.169375   59773 main.go:141] libmachine: (newest-cni-670356) Calling .DriverName
	I0717 23:12:09.175487   59773 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0717 23:12:09.174201   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetState
	I0717 23:12:09.176894   59773 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 23:12:09.178365   59773 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 23:12:09.178387   59773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 23:12:09.178408   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHHostname
	I0717 23:12:09.176933   59773 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0717 23:12:09.178456   59773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0717 23:12:09.178468   59773 main.go:141] libmachine: (newest-cni-670356) Calling .GetSSHHostname
	I0717 23:12:09.183296   59773 main.go:141] libmachine: (newest-cni-670356) Calling .DriverName
	I0717 23:12:09.185211   59773 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-07-17 22:50:21 UTC, ends at Mon 2023-07-17 23:12:10 UTC. --
	Jul 17 23:12:09 embed-certs-571296 crio[726]: time="2023-07-17 23:12:09.599586512Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87,PodSandboxId:a1fdf463e93d493f74f497e404b8ec209b736f04158d0c99708a6fb478e8fb10,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634572503605142,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1138e736-ef8d-4d24-86d5-cac3f58f0dd6,},Annotations:map[string]string{io.kubernetes.container.hash: 9216bcf5,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea,PodSandboxId:d2124bc3946dd019f989686bdee8a7ddd82f784bdf256b334578840cebc4efbb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689634572090932565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xjpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c074cca-2579-4a54-bf55-77bba0fbcd34,},Annotations:map[string]string{io.kubernetes.container.hash: e0f8977a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594,PodSandboxId:af4c08b491f1a72730e8e2b9d03812969712dc2acd8f15ba60c34e89387af9b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689634570774570131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-6ljtn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9488690c-8407-42ce-9938-039af0fa2c4d,},Annotations:map[string]string{io.kubernetes.container.hash: d223da52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2,PodSandboxId:dcb3735fb7b702d080ccb4bb353ff5a0d55ded47eb1194e3caeec02b27533ff2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689634547227044289,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa165bb58267ff6ce3707ef1dedee02,},An
notations:map[string]string{io.kubernetes.container.hash: d0943c0e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e,PodSandboxId:3270b4be80d3c39e3453f220743eb87fdde7e4d2d790fcd7ab9584e05ce0503e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689634546575855141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acb9fa0b62d329874dd103234a29c78,},Annotations:
map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135,PodSandboxId:ff813770ad9d8ec8e086b632a116328e56fcd1cf9de7f93925bb90210d35e709,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689634546471760854,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a4e2aae0879e1b095ac
b61dc1b0b9b,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c,PodSandboxId:a5d980dbc6cbe369536c5153779275ee04e2336538ae5493c410aafbceeb3eb7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689634546262114470,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57393fc23d7ddbdd6adf8d339e89ea7
3,},Annotations:map[string]string{io.kubernetes.container.hash: 601f9b32,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=529fe294-4aab-441e-8b3f-5e9380a6ef63 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 23:12:10 embed-certs-571296 crio[726]: time="2023-07-17 23:12:10.045460275Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c6efa5d7-5f81-4de1-9009-627e9e37cb3c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:10 embed-certs-571296 crio[726]: time="2023-07-17 23:12:10.045581644Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c6efa5d7-5f81-4de1-9009-627e9e37cb3c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:10 embed-certs-571296 crio[726]: time="2023-07-17 23:12:10.045914397Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87,PodSandboxId:a1fdf463e93d493f74f497e404b8ec209b736f04158d0c99708a6fb478e8fb10,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634572503605142,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1138e736-ef8d-4d24-86d5-cac3f58f0dd6,},Annotations:map[string]string{io.kubernetes.container.hash: 9216bcf5,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea,PodSandboxId:d2124bc3946dd019f989686bdee8a7ddd82f784bdf256b334578840cebc4efbb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689634572090932565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xjpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c074cca-2579-4a54-bf55-77bba0fbcd34,},Annotations:map[string]string{io.kubernetes.container.hash: e0f8977a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594,PodSandboxId:af4c08b491f1a72730e8e2b9d03812969712dc2acd8f15ba60c34e89387af9b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689634570774570131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-6ljtn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9488690c-8407-42ce-9938-039af0fa2c4d,},Annotations:map[string]string{io.kubernetes.container.hash: d223da52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2,PodSandboxId:dcb3735fb7b702d080ccb4bb353ff5a0d55ded47eb1194e3caeec02b27533ff2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689634547227044289,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa165bb58267ff6ce3707ef1dedee02,},An
notations:map[string]string{io.kubernetes.container.hash: d0943c0e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e,PodSandboxId:3270b4be80d3c39e3453f220743eb87fdde7e4d2d790fcd7ab9584e05ce0503e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689634546575855141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acb9fa0b62d329874dd103234a29c78,},Annotations:
map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135,PodSandboxId:ff813770ad9d8ec8e086b632a116328e56fcd1cf9de7f93925bb90210d35e709,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689634546471760854,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a4e2aae0879e1b095ac
b61dc1b0b9b,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c,PodSandboxId:a5d980dbc6cbe369536c5153779275ee04e2336538ae5493c410aafbceeb3eb7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689634546262114470,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57393fc23d7ddbdd6adf8d339e89ea7
3,},Annotations:map[string]string{io.kubernetes.container.hash: 601f9b32,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c6efa5d7-5f81-4de1-9009-627e9e37cb3c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:10 embed-certs-571296 crio[726]: time="2023-07-17 23:12:10.088389582Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=88e35345-f283-49d9-9af7-a1b278b2d511 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:10 embed-certs-571296 crio[726]: time="2023-07-17 23:12:10.088486933Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=88e35345-f283-49d9-9af7-a1b278b2d511 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:10 embed-certs-571296 crio[726]: time="2023-07-17 23:12:10.088650621Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87,PodSandboxId:a1fdf463e93d493f74f497e404b8ec209b736f04158d0c99708a6fb478e8fb10,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634572503605142,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1138e736-ef8d-4d24-86d5-cac3f58f0dd6,},Annotations:map[string]string{io.kubernetes.container.hash: 9216bcf5,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea,PodSandboxId:d2124bc3946dd019f989686bdee8a7ddd82f784bdf256b334578840cebc4efbb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689634572090932565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xjpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c074cca-2579-4a54-bf55-77bba0fbcd34,},Annotations:map[string]string{io.kubernetes.container.hash: e0f8977a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594,PodSandboxId:af4c08b491f1a72730e8e2b9d03812969712dc2acd8f15ba60c34e89387af9b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689634570774570131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-6ljtn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9488690c-8407-42ce-9938-039af0fa2c4d,},Annotations:map[string]string{io.kubernetes.container.hash: d223da52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2,PodSandboxId:dcb3735fb7b702d080ccb4bb353ff5a0d55ded47eb1194e3caeec02b27533ff2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689634547227044289,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa165bb58267ff6ce3707ef1dedee02,},An
notations:map[string]string{io.kubernetes.container.hash: d0943c0e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e,PodSandboxId:3270b4be80d3c39e3453f220743eb87fdde7e4d2d790fcd7ab9584e05ce0503e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689634546575855141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acb9fa0b62d329874dd103234a29c78,},Annotations:
map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135,PodSandboxId:ff813770ad9d8ec8e086b632a116328e56fcd1cf9de7f93925bb90210d35e709,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689634546471760854,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a4e2aae0879e1b095ac
b61dc1b0b9b,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c,PodSandboxId:a5d980dbc6cbe369536c5153779275ee04e2336538ae5493c410aafbceeb3eb7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689634546262114470,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57393fc23d7ddbdd6adf8d339e89ea7
3,},Annotations:map[string]string{io.kubernetes.container.hash: 601f9b32,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=88e35345-f283-49d9-9af7-a1b278b2d511 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:10 embed-certs-571296 crio[726]: time="2023-07-17 23:12:10.134887571Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2637bedb-aec5-44fc-9086-ca6e1e499f83 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:10 embed-certs-571296 crio[726]: time="2023-07-17 23:12:10.134969149Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2637bedb-aec5-44fc-9086-ca6e1e499f83 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:10 embed-certs-571296 crio[726]: time="2023-07-17 23:12:10.135193801Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87,PodSandboxId:a1fdf463e93d493f74f497e404b8ec209b736f04158d0c99708a6fb478e8fb10,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634572503605142,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1138e736-ef8d-4d24-86d5-cac3f58f0dd6,},Annotations:map[string]string{io.kubernetes.container.hash: 9216bcf5,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea,PodSandboxId:d2124bc3946dd019f989686bdee8a7ddd82f784bdf256b334578840cebc4efbb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689634572090932565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xjpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c074cca-2579-4a54-bf55-77bba0fbcd34,},Annotations:map[string]string{io.kubernetes.container.hash: e0f8977a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594,PodSandboxId:af4c08b491f1a72730e8e2b9d03812969712dc2acd8f15ba60c34e89387af9b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689634570774570131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-6ljtn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9488690c-8407-42ce-9938-039af0fa2c4d,},Annotations:map[string]string{io.kubernetes.container.hash: d223da52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2,PodSandboxId:dcb3735fb7b702d080ccb4bb353ff5a0d55ded47eb1194e3caeec02b27533ff2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689634547227044289,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa165bb58267ff6ce3707ef1dedee02,},An
notations:map[string]string{io.kubernetes.container.hash: d0943c0e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e,PodSandboxId:3270b4be80d3c39e3453f220743eb87fdde7e4d2d790fcd7ab9584e05ce0503e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689634546575855141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acb9fa0b62d329874dd103234a29c78,},Annotations:
map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135,PodSandboxId:ff813770ad9d8ec8e086b632a116328e56fcd1cf9de7f93925bb90210d35e709,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689634546471760854,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a4e2aae0879e1b095ac
b61dc1b0b9b,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c,PodSandboxId:a5d980dbc6cbe369536c5153779275ee04e2336538ae5493c410aafbceeb3eb7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689634546262114470,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57393fc23d7ddbdd6adf8d339e89ea7
3,},Annotations:map[string]string{io.kubernetes.container.hash: 601f9b32,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2637bedb-aec5-44fc-9086-ca6e1e499f83 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:10 embed-certs-571296 crio[726]: time="2023-07-17 23:12:10.171601002Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0c506220-b702-4a3c-8151-ea78dc656009 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:10 embed-certs-571296 crio[726]: time="2023-07-17 23:12:10.171779188Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0c506220-b702-4a3c-8151-ea78dc656009 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:10 embed-certs-571296 crio[726]: time="2023-07-17 23:12:10.171970748Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87,PodSandboxId:a1fdf463e93d493f74f497e404b8ec209b736f04158d0c99708a6fb478e8fb10,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634572503605142,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1138e736-ef8d-4d24-86d5-cac3f58f0dd6,},Annotations:map[string]string{io.kubernetes.container.hash: 9216bcf5,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea,PodSandboxId:d2124bc3946dd019f989686bdee8a7ddd82f784bdf256b334578840cebc4efbb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689634572090932565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xjpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c074cca-2579-4a54-bf55-77bba0fbcd34,},Annotations:map[string]string{io.kubernetes.container.hash: e0f8977a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594,PodSandboxId:af4c08b491f1a72730e8e2b9d03812969712dc2acd8f15ba60c34e89387af9b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689634570774570131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-6ljtn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9488690c-8407-42ce-9938-039af0fa2c4d,},Annotations:map[string]string{io.kubernetes.container.hash: d223da52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2,PodSandboxId:dcb3735fb7b702d080ccb4bb353ff5a0d55ded47eb1194e3caeec02b27533ff2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689634547227044289,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa165bb58267ff6ce3707ef1dedee02,},An
notations:map[string]string{io.kubernetes.container.hash: d0943c0e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e,PodSandboxId:3270b4be80d3c39e3453f220743eb87fdde7e4d2d790fcd7ab9584e05ce0503e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689634546575855141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acb9fa0b62d329874dd103234a29c78,},Annotations:
map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135,PodSandboxId:ff813770ad9d8ec8e086b632a116328e56fcd1cf9de7f93925bb90210d35e709,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689634546471760854,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a4e2aae0879e1b095ac
b61dc1b0b9b,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c,PodSandboxId:a5d980dbc6cbe369536c5153779275ee04e2336538ae5493c410aafbceeb3eb7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689634546262114470,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57393fc23d7ddbdd6adf8d339e89ea7
3,},Annotations:map[string]string{io.kubernetes.container.hash: 601f9b32,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0c506220-b702-4a3c-8151-ea78dc656009 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:10 embed-certs-571296 crio[726]: time="2023-07-17 23:12:10.218115197Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1cefe3a5-2947-48d3-96dc-260a4be4e1ce name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:10 embed-certs-571296 crio[726]: time="2023-07-17 23:12:10.218204114Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1cefe3a5-2947-48d3-96dc-260a4be4e1ce name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:10 embed-certs-571296 crio[726]: time="2023-07-17 23:12:10.218404672Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87,PodSandboxId:a1fdf463e93d493f74f497e404b8ec209b736f04158d0c99708a6fb478e8fb10,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634572503605142,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1138e736-ef8d-4d24-86d5-cac3f58f0dd6,},Annotations:map[string]string{io.kubernetes.container.hash: 9216bcf5,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea,PodSandboxId:d2124bc3946dd019f989686bdee8a7ddd82f784bdf256b334578840cebc4efbb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689634572090932565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xjpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c074cca-2579-4a54-bf55-77bba0fbcd34,},Annotations:map[string]string{io.kubernetes.container.hash: e0f8977a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594,PodSandboxId:af4c08b491f1a72730e8e2b9d03812969712dc2acd8f15ba60c34e89387af9b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689634570774570131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-6ljtn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9488690c-8407-42ce-9938-039af0fa2c4d,},Annotations:map[string]string{io.kubernetes.container.hash: d223da52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2,PodSandboxId:dcb3735fb7b702d080ccb4bb353ff5a0d55ded47eb1194e3caeec02b27533ff2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689634547227044289,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa165bb58267ff6ce3707ef1dedee02,},An
notations:map[string]string{io.kubernetes.container.hash: d0943c0e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e,PodSandboxId:3270b4be80d3c39e3453f220743eb87fdde7e4d2d790fcd7ab9584e05ce0503e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689634546575855141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acb9fa0b62d329874dd103234a29c78,},Annotations:
map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135,PodSandboxId:ff813770ad9d8ec8e086b632a116328e56fcd1cf9de7f93925bb90210d35e709,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689634546471760854,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a4e2aae0879e1b095ac
b61dc1b0b9b,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c,PodSandboxId:a5d980dbc6cbe369536c5153779275ee04e2336538ae5493c410aafbceeb3eb7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689634546262114470,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57393fc23d7ddbdd6adf8d339e89ea7
3,},Annotations:map[string]string{io.kubernetes.container.hash: 601f9b32,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1cefe3a5-2947-48d3-96dc-260a4be4e1ce name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:10 embed-certs-571296 crio[726]: time="2023-07-17 23:12:10.269254740Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c5fefb6f-1fd6-4c61-a13a-5a4ad2b36654 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:10 embed-certs-571296 crio[726]: time="2023-07-17 23:12:10.269347853Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c5fefb6f-1fd6-4c61-a13a-5a4ad2b36654 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:10 embed-certs-571296 crio[726]: time="2023-07-17 23:12:10.269506651Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87,PodSandboxId:a1fdf463e93d493f74f497e404b8ec209b736f04158d0c99708a6fb478e8fb10,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634572503605142,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1138e736-ef8d-4d24-86d5-cac3f58f0dd6,},Annotations:map[string]string{io.kubernetes.container.hash: 9216bcf5,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea,PodSandboxId:d2124bc3946dd019f989686bdee8a7ddd82f784bdf256b334578840cebc4efbb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689634572090932565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xjpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c074cca-2579-4a54-bf55-77bba0fbcd34,},Annotations:map[string]string{io.kubernetes.container.hash: e0f8977a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594,PodSandboxId:af4c08b491f1a72730e8e2b9d03812969712dc2acd8f15ba60c34e89387af9b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689634570774570131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-6ljtn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9488690c-8407-42ce-9938-039af0fa2c4d,},Annotations:map[string]string{io.kubernetes.container.hash: d223da52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2,PodSandboxId:dcb3735fb7b702d080ccb4bb353ff5a0d55ded47eb1194e3caeec02b27533ff2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689634547227044289,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa165bb58267ff6ce3707ef1dedee02,},An
notations:map[string]string{io.kubernetes.container.hash: d0943c0e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e,PodSandboxId:3270b4be80d3c39e3453f220743eb87fdde7e4d2d790fcd7ab9584e05ce0503e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689634546575855141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acb9fa0b62d329874dd103234a29c78,},Annotations:
map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135,PodSandboxId:ff813770ad9d8ec8e086b632a116328e56fcd1cf9de7f93925bb90210d35e709,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689634546471760854,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a4e2aae0879e1b095ac
b61dc1b0b9b,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c,PodSandboxId:a5d980dbc6cbe369536c5153779275ee04e2336538ae5493c410aafbceeb3eb7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689634546262114470,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57393fc23d7ddbdd6adf8d339e89ea7
3,},Annotations:map[string]string{io.kubernetes.container.hash: 601f9b32,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c5fefb6f-1fd6-4c61-a13a-5a4ad2b36654 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:10 embed-certs-571296 crio[726]: time="2023-07-17 23:12:10.309183928Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=faa53b20-ed38-4aad-bde3-fc954dd89143 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:10 embed-certs-571296 crio[726]: time="2023-07-17 23:12:10.309288626Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=faa53b20-ed38-4aad-bde3-fc954dd89143 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:10 embed-certs-571296 crio[726]: time="2023-07-17 23:12:10.309448245Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87,PodSandboxId:a1fdf463e93d493f74f497e404b8ec209b736f04158d0c99708a6fb478e8fb10,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634572503605142,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1138e736-ef8d-4d24-86d5-cac3f58f0dd6,},Annotations:map[string]string{io.kubernetes.container.hash: 9216bcf5,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea,PodSandboxId:d2124bc3946dd019f989686bdee8a7ddd82f784bdf256b334578840cebc4efbb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689634572090932565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xjpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c074cca-2579-4a54-bf55-77bba0fbcd34,},Annotations:map[string]string{io.kubernetes.container.hash: e0f8977a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594,PodSandboxId:af4c08b491f1a72730e8e2b9d03812969712dc2acd8f15ba60c34e89387af9b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689634570774570131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-6ljtn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9488690c-8407-42ce-9938-039af0fa2c4d,},Annotations:map[string]string{io.kubernetes.container.hash: d223da52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2,PodSandboxId:dcb3735fb7b702d080ccb4bb353ff5a0d55ded47eb1194e3caeec02b27533ff2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689634547227044289,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa165bb58267ff6ce3707ef1dedee02,},An
notations:map[string]string{io.kubernetes.container.hash: d0943c0e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e,PodSandboxId:3270b4be80d3c39e3453f220743eb87fdde7e4d2d790fcd7ab9584e05ce0503e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689634546575855141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acb9fa0b62d329874dd103234a29c78,},Annotations:
map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135,PodSandboxId:ff813770ad9d8ec8e086b632a116328e56fcd1cf9de7f93925bb90210d35e709,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689634546471760854,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a4e2aae0879e1b095ac
b61dc1b0b9b,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c,PodSandboxId:a5d980dbc6cbe369536c5153779275ee04e2336538ae5493c410aafbceeb3eb7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689634546262114470,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57393fc23d7ddbdd6adf8d339e89ea7
3,},Annotations:map[string]string{io.kubernetes.container.hash: 601f9b32,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=faa53b20-ed38-4aad-bde3-fc954dd89143 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:10 embed-certs-571296 crio[726]: time="2023-07-17 23:12:10.351386676Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9db6aed0-b54b-43da-bf94-8c84f0586a0a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:10 embed-certs-571296 crio[726]: time="2023-07-17 23:12:10.351508558Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9db6aed0-b54b-43da-bf94-8c84f0586a0a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:10 embed-certs-571296 crio[726]: time="2023-07-17 23:12:10.351859094Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87,PodSandboxId:a1fdf463e93d493f74f497e404b8ec209b736f04158d0c99708a6fb478e8fb10,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634572503605142,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1138e736-ef8d-4d24-86d5-cac3f58f0dd6,},Annotations:map[string]string{io.kubernetes.container.hash: 9216bcf5,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea,PodSandboxId:d2124bc3946dd019f989686bdee8a7ddd82f784bdf256b334578840cebc4efbb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689634572090932565,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xjpds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c074cca-2579-4a54-bf55-77bba0fbcd34,},Annotations:map[string]string{io.kubernetes.container.hash: e0f8977a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594,PodSandboxId:af4c08b491f1a72730e8e2b9d03812969712dc2acd8f15ba60c34e89387af9b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689634570774570131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-6ljtn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9488690c-8407-42ce-9938-039af0fa2c4d,},Annotations:map[string]string{io.kubernetes.container.hash: d223da52,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2,PodSandboxId:dcb3735fb7b702d080ccb4bb353ff5a0d55ded47eb1194e3caeec02b27533ff2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689634547227044289,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa165bb58267ff6ce3707ef1dedee02,},An
notations:map[string]string{io.kubernetes.container.hash: d0943c0e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e,PodSandboxId:3270b4be80d3c39e3453f220743eb87fdde7e4d2d790fcd7ab9584e05ce0503e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689634546575855141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acb9fa0b62d329874dd103234a29c78,},Annotations:
map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135,PodSandboxId:ff813770ad9d8ec8e086b632a116328e56fcd1cf9de7f93925bb90210d35e709,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689634546471760854,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a4e2aae0879e1b095ac
b61dc1b0b9b,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c,PodSandboxId:a5d980dbc6cbe369536c5153779275ee04e2336538ae5493c410aafbceeb3eb7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689634546262114470,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-571296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57393fc23d7ddbdd6adf8d339e89ea7
3,},Annotations:map[string]string{io.kubernetes.container.hash: 601f9b32,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9db6aed0-b54b-43da-bf94-8c84f0586a0a name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	9c19e84545ef3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   a1fdf463e93d4
	5768c3f6c2960       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c   15 minutes ago      Running             kube-proxy                0                   d2124bc3946dd
	828166d2e045a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   15 minutes ago      Running             coredns                   0                   af4c08b491f1a
	e899989fdd5cd       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681   16 minutes ago      Running             etcd                      2                   dcb3735fb7b70
	a818326b40b2f       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a   16 minutes ago      Running             kube-scheduler            2                   3270b4be80d3c
	0272ac3812d33       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f   16 minutes ago      Running             kube-controller-manager   2                   ff813770ad9d8
	50fe7f6b0feef       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a   16 minutes ago      Running             kube-apiserver            2                   a5d980dbc6cbe
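	
	(A listing equivalent to the table above can be reproduced directly against the node's CRI-O runtime. This is only a sketch, assuming the minikube binary is on PATH, the embed-certs-571296 profile is still running, and crictl is present in the guest image as it normally is for minikube:)
	
	  # list all CRI-O containers (running and exited) inside the minikube VM
	  minikube -p embed-certs-571296 ssh "sudo crictl ps -a"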
	
	* 
	* ==> coredns [828166d2e045a7ad90287f3c679c66e2b498181815a84b4d8a4ff6e6ca6aa594] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:37070 - 25273 "HINFO IN 7417818828265277478.6923989445757744740. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017827035s
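	
	(The same CoreDNS output can also be fetched live with kubectl rather than from the captured logs. A sketch, assuming the kubeconfig context for this profile is named embed-certs-571296; the pod name is taken from the listing above:)
	
	  # stream the CoreDNS pod logs from the kube-system namespace
	  kubectl --context embed-certs-571296 -n kube-system logs coredns-5d78c9869d-6ljtn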
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-571296
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-571296
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8
	                    minikube.k8s.io/name=embed-certs-571296
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T22_55_55_0700
	                    minikube.k8s.io/version=v1.31.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 22:55:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-571296
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 23:12:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 23:11:34 +0000   Mon, 17 Jul 2023 22:55:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 23:11:34 +0000   Mon, 17 Jul 2023 22:55:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 23:11:34 +0000   Mon, 17 Jul 2023 22:55:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 23:11:34 +0000   Mon, 17 Jul 2023 22:56:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.179
	  Hostname:    embed-certs-571296
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 5f599878ef444243a720c3dbd0b0a67a
	  System UUID:                5f599878-ef44-4243-a720-c3dbd0b0a67a
	  Boot ID:                    305230bc-a94e-4ef4-82b6-56fed7cc0a51
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-6ljtn                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-embed-certs-571296                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-embed-certs-571296             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-embed-certs-571296    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-xjpds                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-embed-certs-571296             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-74d5c6b9c-cknmm                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node embed-certs-571296 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node embed-certs-571296 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node embed-certs-571296 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node embed-certs-571296 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node embed-certs-571296 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node embed-certs-571296 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             16m                kubelet          Node embed-certs-571296 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                16m                kubelet          Node embed-certs-571296 status is now: NodeReady
	  Normal  RegisteredNode           16m                node-controller  Node embed-certs-571296 event: Registered Node embed-certs-571296 in Controller
	
	* 
	* ==> dmesg <==
	* [Jul17 22:50] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070530] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.346961] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.481208] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.142396] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.432217] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.000505] systemd-fstab-generator[650]: Ignoring "noauto" for root device
	[  +0.115036] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.141145] systemd-fstab-generator[674]: Ignoring "noauto" for root device
	[  +0.116206] systemd-fstab-generator[685]: Ignoring "noauto" for root device
	[  +0.210624] systemd-fstab-generator[709]: Ignoring "noauto" for root device
	[ +17.116418] systemd-fstab-generator[924]: Ignoring "noauto" for root device
	[Jul17 22:51] kauditd_printk_skb: 29 callbacks suppressed
	[ +25.214063] hrtimer: interrupt took 6366582 ns
	[Jul17 22:55] systemd-fstab-generator[3542]: Ignoring "noauto" for root device
	[  +9.833746] systemd-fstab-generator[3861]: Ignoring "noauto" for root device
	[Jul17 22:56] kauditd_printk_skb: 7 callbacks suppressed
	
	* 
	* ==> etcd [e899989fdd5cdad29184d8b4e191bb671f0ce218da04da08b1986242e7ee41e2] <==
	* {"level":"info","ts":"2023-07-17T22:55:49.589Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"be5c98cbd915062","local-member-id":"564c1a3a64ab9e7c","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T22:55:49.589Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T22:55:49.589Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T22:55:49.589Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T22:55:49.590Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-17T22:55:49.591Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.61.179:2379"}
	{"level":"info","ts":"2023-07-17T22:55:49.591Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-17T22:55:49.591Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-17T23:05:49.626Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":725}
	{"level":"info","ts":"2023-07-17T23:05:49.634Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":725,"took":"6.973134ms","hash":3749277493}
	{"level":"info","ts":"2023-07-17T23:05:49.636Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3749277493,"revision":725,"compact-revision":-1}
	{"level":"info","ts":"2023-07-17T23:10:40.224Z","caller":"traceutil/trace.go:171","msg":"trace[1657013928] transaction","detail":"{read_only:false; response_revision:1204; number_of_response:1; }","duration":"227.901234ms","start":"2023-07-17T23:10:39.996Z","end":"2023-07-17T23:10:40.224Z","steps":["trace[1657013928] 'process raft request'  (duration: 227.702639ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T23:10:40.450Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.247403ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-17T23:10:40.450Z","caller":"traceutil/trace.go:171","msg":"trace[1203969798] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1204; }","duration":"138.557802ms","start":"2023-07-17T23:10:40.311Z","end":"2023-07-17T23:10:40.450Z","steps":["trace[1203969798] 'range keys from in-memory index tree'  (duration: 138.041663ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T23:10:49.645Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":968}
	{"level":"info","ts":"2023-07-17T23:10:49.646Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":968,"took":"1.31847ms","hash":3727615681}
	{"level":"info","ts":"2023-07-17T23:10:49.646Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3727615681,"revision":968,"compact-revision":725}
	{"level":"info","ts":"2023-07-17T23:11:49.544Z","caller":"traceutil/trace.go:171","msg":"trace[1026859183] transaction","detail":"{read_only:false; response_revision:1261; number_of_response:1; }","duration":"850.436374ms","start":"2023-07-17T23:11:48.694Z","end":"2023-07-17T23:11:49.544Z","steps":["trace[1026859183] 'process raft request'  (duration: 850.094878ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T23:11:49.546Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T23:11:48.694Z","time spent":"851.047025ms","remote":"127.0.0.1:60032","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1260 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2023-07-17T23:11:49.981Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"405.927199ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-17T23:11:49.981Z","caller":"traceutil/trace.go:171","msg":"trace[336017519] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1261; }","duration":"406.184412ms","start":"2023-07-17T23:11:49.575Z","end":"2023-07-17T23:11:49.981Z","steps":["trace[336017519] 'range keys from in-memory index tree'  (duration: 405.89622ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T23:11:49.981Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"317.292105ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-07-17T23:11:49.981Z","caller":"traceutil/trace.go:171","msg":"trace[475674760] range","detail":"{range_begin:/registry/rolebindings/; range_end:/registry/rolebindings0; response_count:0; response_revision:1261; }","duration":"317.834573ms","start":"2023-07-17T23:11:49.664Z","end":"2023-07-17T23:11:49.981Z","steps":["trace[475674760] 'count revisions from in-memory index tree'  (duration: 317.093111ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T23:11:49.981Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T23:11:49.664Z","time spent":"317.932137ms","remote":"127.0.0.1:60074","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":13,"response size":29,"request content":"key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" count_only:true "}
	{"level":"info","ts":"2023-07-17T23:11:50.609Z","caller":"traceutil/trace.go:171","msg":"trace[87092295] transaction","detail":"{read_only:false; response_revision:1262; number_of_response:1; }","duration":"113.554508ms","start":"2023-07-17T23:11:50.496Z","end":"2023-07-17T23:11:50.609Z","steps":["trace[87092295] 'process raft request'  (duration: 113.28823ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  23:12:10 up 21 min,  0 users,  load average: 0.20, 0.33, 0.30
	Linux embed-certs-571296 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [50fe7f6b0feefda8b40dbe6dc40757fb9e7057db83edbbce68c184ca86f9321c] <==
	* I0717 23:09:51.161268       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0717 23:10:51.160918       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.96.165.138:443: connect: connection refused
	I0717 23:10:51.161031       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0717 23:10:51.279900       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.96.165.138:443: connect: connection refused
	I0717 23:10:51.280012       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0717 23:10:52.280586       1 handler_proxy.go:100] no RequestInfo found in the context
	W0717 23:10:52.280713       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 23:10:52.281045       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 23:10:52.281110       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0717 23:10:52.281175       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 23:10:52.282516       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 23:11:49.546968       1 trace.go:219] Trace[249989581]: "Update" accept:application/json, */*,audit-id:afb9c4df-1476-4c60-af54-08e694cba348,client:192.168.61.179,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (17-Jul-2023 23:11:48.691) (total time: 855ms):
	Trace[249989581]: ["GuaranteedUpdate etcd3" audit-id:afb9c4df-1476-4c60-af54-08e694cba348,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 854ms (23:11:48.692)
	Trace[249989581]:  ---"Txn call completed" 853ms (23:11:49.546)]
	Trace[249989581]: [855.045552ms] [855.045552ms] END
	I0717 23:11:51.160377       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.96.165.138:443: connect: connection refused
	I0717 23:11:51.160488       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0717 23:11:52.281725       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 23:11:52.281888       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 23:11:52.281937       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 23:11:52.282904       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 23:11:52.283005       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 23:11:52.283110       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [0272ac3812d332375d9dfbeb2079ee481cfc8fe59c4a3193e358be06f41d7135] <==
	* W0717 23:06:07.648884       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:06:37.111339       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:06:37.666881       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:07:07.119753       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:07:07.676435       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:07:37.126081       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:07:37.687075       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:08:07.133069       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:08:07.695423       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:08:37.138869       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:08:37.708808       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:09:07.155410       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:09:07.718859       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:09:37.161334       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:09:37.729851       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:10:07.168220       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:10:07.738629       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:10:37.174079       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:10:37.752136       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:11:07.182591       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:11:07.768099       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:11:37.187836       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:11:37.777794       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:12:07.199096       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:12:07.790017       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	
	* 
	* ==> kube-proxy [5768c3f6c2960eedc229b4b11c38f4848fed62aa3f041907f90824dd411d19ea] <==
	* I0717 22:56:12.753578       1 node.go:141] Successfully retrieved node IP: 192.168.61.179
	I0717 22:56:12.754254       1 server_others.go:110] "Detected node IP" address="192.168.61.179"
	I0717 22:56:12.754470       1 server_others.go:554] "Using iptables proxy"
	I0717 22:56:12.797196       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0717 22:56:12.797282       1 server_others.go:192] "Using iptables Proxier"
	I0717 22:56:12.798052       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 22:56:12.799347       1 server.go:658] "Version info" version="v1.27.3"
	I0717 22:56:12.799411       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 22:56:12.803398       1 config.go:188] "Starting service config controller"
	I0717 22:56:12.805010       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 22:56:12.805347       1 config.go:97] "Starting endpoint slice config controller"
	I0717 22:56:12.805387       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 22:56:12.811782       1 config.go:315] "Starting node config controller"
	I0717 22:56:12.812037       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 22:56:12.905595       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0717 22:56:12.905753       1 shared_informer.go:318] Caches are synced for service config
	I0717 22:56:12.912361       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [a818326b40b2f72f790c7d9a2fd0da857e53ca453f97038471e7f0d903be955e] <==
	* W0717 22:55:51.345847       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 22:55:51.346746       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 22:55:52.169072       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 22:55:52.169237       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 22:55:52.279187       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 22:55:52.279279       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 22:55:52.340312       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 22:55:52.340426       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 22:55:52.363615       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 22:55:52.363783       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 22:55:52.383899       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 22:55:52.383992       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 22:55:52.420072       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 22:55:52.420148       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 22:55:52.460013       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 22:55:52.460140       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 22:55:52.477853       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 22:55:52.477965       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 22:55:52.498127       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 22:55:52.498234       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 22:55:52.547416       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 22:55:52.547502       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 22:55:52.847567       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 22:55:52.847937       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 22:55:54.696321       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-07-17 22:50:21 UTC, ends at Mon 2023-07-17 23:12:10 UTC. --
	Jul 17 23:09:55 embed-certs-571296 kubelet[3869]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 23:09:55 embed-certs-571296 kubelet[3869]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 23:09:55 embed-certs-571296 kubelet[3869]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 23:10:04 embed-certs-571296 kubelet[3869]: E0717 23:10:04.195063    3869 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-cknmm" podUID=d1fb930f-518d-4ff4-94fe-7743ab55ecc6
	Jul 17 23:10:18 embed-certs-571296 kubelet[3869]: E0717 23:10:18.196150    3869 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-cknmm" podUID=d1fb930f-518d-4ff4-94fe-7743ab55ecc6
	Jul 17 23:10:32 embed-certs-571296 kubelet[3869]: E0717 23:10:32.194374    3869 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-cknmm" podUID=d1fb930f-518d-4ff4-94fe-7743ab55ecc6
	Jul 17 23:10:43 embed-certs-571296 kubelet[3869]: E0717 23:10:43.194600    3869 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-cknmm" podUID=d1fb930f-518d-4ff4-94fe-7743ab55ecc6
	Jul 17 23:10:54 embed-certs-571296 kubelet[3869]: E0717 23:10:54.194504    3869 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-cknmm" podUID=d1fb930f-518d-4ff4-94fe-7743ab55ecc6
	Jul 17 23:10:55 embed-certs-571296 kubelet[3869]: E0717 23:10:55.332465    3869 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 17 23:10:55 embed-certs-571296 kubelet[3869]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 23:10:55 embed-certs-571296 kubelet[3869]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 23:10:55 embed-certs-571296 kubelet[3869]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 23:10:55 embed-certs-571296 kubelet[3869]: E0717 23:10:55.427349    3869 container_manager_linux.go:515] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Jul 17 23:11:05 embed-certs-571296 kubelet[3869]: E0717 23:11:05.195007    3869 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-cknmm" podUID=d1fb930f-518d-4ff4-94fe-7743ab55ecc6
	Jul 17 23:11:16 embed-certs-571296 kubelet[3869]: E0717 23:11:16.194233    3869 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-cknmm" podUID=d1fb930f-518d-4ff4-94fe-7743ab55ecc6
	Jul 17 23:11:31 embed-certs-571296 kubelet[3869]: E0717 23:11:31.194839    3869 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-cknmm" podUID=d1fb930f-518d-4ff4-94fe-7743ab55ecc6
	Jul 17 23:11:43 embed-certs-571296 kubelet[3869]: E0717 23:11:43.194103    3869 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-cknmm" podUID=d1fb930f-518d-4ff4-94fe-7743ab55ecc6
	Jul 17 23:11:55 embed-certs-571296 kubelet[3869]: E0717 23:11:55.333157    3869 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 17 23:11:55 embed-certs-571296 kubelet[3869]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 23:11:55 embed-certs-571296 kubelet[3869]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 23:11:55 embed-certs-571296 kubelet[3869]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 23:11:58 embed-certs-571296 kubelet[3869]: E0717 23:11:58.210159    3869 remote_image.go:167] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 17 23:11:58 embed-certs-571296 kubelet[3869]: E0717 23:11:58.210209    3869 kuberuntime_image.go:53] "Failed to pull image" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 17 23:11:58 embed-certs-571296 kubelet[3869]: E0717 23:11:58.210362    3869 kuberuntime_manager.go:1212] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-7hrfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pr
obeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod metrics-server-74d5c6b9c-cknmm_kube-system(d1fb930f-518d-4ff4-94fe-7743ab55ecc6): ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 17 23:11:58 embed-certs-571296 kubelet[3869]: E0717 23:11:58.210396    3869 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-74d5c6b9c-cknmm" podUID=d1fb930f-518d-4ff4-94fe-7743ab55ecc6
	
	* 
	* ==> storage-provisioner [9c19e84545ef3e263e1e085fc9ddc8cdcd5956ae253363bd3f8036c5dd347a87] <==
	* I0717 22:56:12.671565       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 22:56:12.687180       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 22:56:12.687256       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 22:56:12.704619       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 22:56:12.705995       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-571296_9e5d8b9a-c6b9-4f0e-bad7-e5fc4765aad8!
	I0717 22:56:12.707640       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4fcc2d2a-fa66-4f1f-bc39-b898ddd2283a", APIVersion:"v1", ResourceVersion:"465", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-571296_9e5d8b9a-c6b9-4f0e-bad7-e5fc4765aad8 became leader
	I0717 22:56:12.807132       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-571296_9e5d8b9a-c6b9-4f0e-bad7-e5fc4765aad8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-571296 -n embed-certs-571296
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-571296 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5c6b9c-cknmm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-571296 describe pod metrics-server-74d5c6b9c-cknmm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-571296 describe pod metrics-server-74d5c6b9c-cknmm: exit status 1 (67.097774ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5c6b9c-cknmm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-571296 describe pod metrics-server-74d5c6b9c-cknmm: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (162.01s)
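
Note: the kubelet log above shows the metrics-server pod failing with ImagePullBackOff because the addon is enabled with its registry overridden to the unresolvable fake.domain (the analogous "addons enable metrics-server ... --registries=MetricsServer=fake.domain" invocation appears in the Audit table of the next section). A minimal sketch of how that state could be checked by hand, reusing the context and pod names from this log (the pod-name suffix differs between runs):

	# show the recorded image-pull failure on the metrics-server pod
	kubectl --context embed-certs-571296 -n kube-system describe pod metrics-server-74d5c6b9c-cknmm
	# list pods that are not Running, as the post-mortem helper does
	kubectl --context embed-certs-571296 get po -A --field-selector=status.phase!=Running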

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (113.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-504828 -n default-k8s-diff-port-504828
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-07-17 23:12:24.845628099 +0000 UTC m=+5508.635414099
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-504828 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-504828 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.23µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-504828 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
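
Note: the assertion at start_stop_delete_test.go:297 inspects the image used by the dashboard deployment. A rough manual equivalent, assuming the cluster is still reachable and reusing the context and deployment names from the lines above (the jsonpath expression is illustrative):

	# print the container images of the dashboard-metrics-scraper deployment (expected to contain registry.k8s.io/echoserver:1.4)
	kubectl --context default-k8s-diff-port-504828 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'
	# re-run the readiness wait the test was performing (9m0s in the test)
	kubectl --context default-k8s-diff-port-504828 -n kubernetes-dashboard wait --for=condition=ready pod --selector=k8s-app=kubernetes-dashboard --timeout=540s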
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-504828 -n default-k8s-diff-port-504828
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-504828 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-504828 logs -n 25: (1.128184473s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p default-k8s-diff-port-504828  | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC | 17 Jul 23 22:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC |                     |
	|         | default-k8s-diff-port-504828                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-332820             | old-k8s-version-332820       | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-332820                              | old-k8s-version-332820       | jenkins | v1.31.0 | 17 Jul 23 22:45 UTC | 17 Jul 23 22:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-571296                 | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 22:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-571296                                  | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 22:46 UTC | 17 Jul 23 23:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-935524                  | no-preload-935524            | jenkins | v1.31.0 | 17 Jul 23 22:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-504828       | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-935524                                   | no-preload-935524            | jenkins | v1.31.0 | 17 Jul 23 22:47 UTC | 17 Jul 23 22:56 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-504828 | jenkins | v1.31.0 | 17 Jul 23 22:47 UTC | 17 Jul 23 23:01 UTC |
	|         | default-k8s-diff-port-504828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-332820                              | old-k8s-version-332820       | jenkins | v1.31.0 | 17 Jul 23 23:10 UTC | 17 Jul 23 23:10 UTC |
	| start   | -p newest-cni-670356 --memory=2200 --alsologtostderr   | newest-cni-670356            | jenkins | v1.31.0 | 17 Jul 23 23:10 UTC | 17 Jul 23 23:11 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-670356             | newest-cni-670356            | jenkins | v1.31.0 | 17 Jul 23 23:11 UTC | 17 Jul 23 23:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-670356                                   | newest-cni-670356            | jenkins | v1.31.0 | 17 Jul 23 23:11 UTC | 17 Jul 23 23:11 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-670356                  | newest-cni-670356            | jenkins | v1.31.0 | 17 Jul 23 23:11 UTC | 17 Jul 23 23:11 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-670356 --memory=2200 --alsologtostderr   | newest-cni-670356            | jenkins | v1.31.0 | 17 Jul 23 23:11 UTC | 17 Jul 23 23:12 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| delete  | -p embed-certs-571296                                  | embed-certs-571296           | jenkins | v1.31.0 | 17 Jul 23 23:12 UTC | 17 Jul 23 23:12 UTC |
	| ssh     | -p newest-cni-670356 sudo                              | newest-cni-670356            | jenkins | v1.31.0 | 17 Jul 23 23:12 UTC | 17 Jul 23 23:12 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-670356                                   | newest-cni-670356            | jenkins | v1.31.0 | 17 Jul 23 23:12 UTC | 17 Jul 23 23:12 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-670356                                   | newest-cni-670356            | jenkins | v1.31.0 | 17 Jul 23 23:12 UTC | 17 Jul 23 23:12 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-935524                                   | no-preload-935524            | jenkins | v1.31.0 | 17 Jul 23 23:12 UTC | 17 Jul 23 23:12 UTC |
	| delete  | -p newest-cni-670356                                   | newest-cni-670356            | jenkins | v1.31.0 | 17 Jul 23 23:12 UTC | 17 Jul 23 23:12 UTC |
	| start   | -p kindnet-987609                                      | kindnet-987609               | jenkins | v1.31.0 | 17 Jul 23 23:12 UTC |                     |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p newest-cni-670356                                   | newest-cni-670356            | jenkins | v1.31.0 | 17 Jul 23 23:12 UTC | 17 Jul 23 23:12 UTC |
	| start   | -p calico-987609 --memory=3072                         | calico-987609                | jenkins | v1.31.0 | 17 Jul 23 23:12 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                             |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
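	
	A sketch of the last `start` entry above reassembled into a single invocation, reconstructed only from the table rows (the bare `minikube` binary name is an assumption; this CI run invokes it via MINIKUBE_BIN=out/minikube-linux-amd64 as shown in the log below):
	
	    minikube start -p calico-987609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 --container-runtime=crio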
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 23:12:17
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 23:12:17.692631   60956 out.go:296] Setting OutFile to fd 1 ...
	I0717 23:12:17.692768   60956 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 23:12:17.692778   60956 out.go:309] Setting ErrFile to fd 2...
	I0717 23:12:17.692785   60956 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 23:12:17.693000   60956 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-15759/.minikube/bin
	I0717 23:12:17.693657   60956 out.go:303] Setting JSON to false
	I0717 23:12:17.694613   60956 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":10490,"bootTime":1689625048,"procs":260,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 23:12:17.694678   60956 start.go:138] virtualization: kvm guest
	I0717 23:12:17.697098   60956 out.go:177] * [calico-987609] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 23:12:17.698696   60956 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 23:12:17.698705   60956 notify.go:220] Checking for updates...
	I0717 23:12:17.700365   60956 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 23:12:17.703088   60956 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 23:12:17.704546   60956 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-15759/.minikube
	I0717 23:12:17.705969   60956 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 23:12:17.707344   60956 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 23:12:17.709408   60956 config.go:182] Loaded profile config "auto-987609": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 23:12:17.709580   60956 config.go:182] Loaded profile config "default-k8s-diff-port-504828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 23:12:17.709696   60956 config.go:182] Loaded profile config "kindnet-987609": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 23:12:17.709822   60956 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 23:12:17.744744   60956 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 23:12:17.746219   60956 start.go:298] selected driver: kvm2
	I0717 23:12:17.746234   60956 start.go:880] validating driver "kvm2" against <nil>
	I0717 23:12:17.746246   60956 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 23:12:17.747180   60956 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 23:12:17.747260   60956 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16899-15759/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 23:12:17.762523   60956 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.0
	I0717 23:12:17.762591   60956 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 23:12:17.762850   60956 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 23:12:17.762887   60956 cni.go:84] Creating CNI manager for "calico"
	I0717 23:12:17.762894   60956 start_flags.go:314] Found "Calico" CNI - setting NetworkPlugin=cni
	I0717 23:12:17.762907   60956 start_flags.go:319] config:
	{Name:calico-987609 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:calico-987609 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni
FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 23:12:17.763113   60956 iso.go:125] acquiring lock: {Name:mk0fc3164b0f4222cb2c3b274841202f1f6c0fc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 23:12:17.766271   60956 out.go:177] * Starting control plane node calico-987609 in cluster calico-987609
	I0717 23:12:17.306300   60855 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 23:12:17.306357   60855 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0717 23:12:17.306377   60855 cache.go:57] Caching tarball of preloaded images
	I0717 23:12:17.306462   60855 preload.go:174] Found /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 23:12:17.306483   60855 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 23:12:17.306610   60855 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/kindnet-987609/config.json ...
	I0717 23:12:17.306632   60855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/kindnet-987609/config.json: {Name:mka79f0b64deea2b00915e815b55f059e46ca146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 23:12:17.306777   60855 start.go:365] acquiring machines lock for kindnet-987609: {Name:mk8a02d78ba6485e1a7d39c2e7feff944c6a9dbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 23:12:18.621253   60278 main.go:141] libmachine: (auto-987609) DBG | domain auto-987609 has defined MAC address 52:54:00:f0:80:d3 in network mk-auto-987609
	I0717 23:12:18.621848   60278 main.go:141] libmachine: (auto-987609) DBG | unable to find current IP address of domain auto-987609 in network mk-auto-987609
	I0717 23:12:18.621882   60278 main.go:141] libmachine: (auto-987609) DBG | I0717 23:12:18.621807   60318 retry.go:31] will retry after 1.304643006s: waiting for machine to come up
	I0717 23:12:19.928191   60278 main.go:141] libmachine: (auto-987609) DBG | domain auto-987609 has defined MAC address 52:54:00:f0:80:d3 in network mk-auto-987609
	I0717 23:12:19.928624   60278 main.go:141] libmachine: (auto-987609) DBG | unable to find current IP address of domain auto-987609 in network mk-auto-987609
	I0717 23:12:19.928641   60278 main.go:141] libmachine: (auto-987609) DBG | I0717 23:12:19.928579   60318 retry.go:31] will retry after 1.757611853s: waiting for machine to come up
	I0717 23:12:21.688789   60278 main.go:141] libmachine: (auto-987609) DBG | domain auto-987609 has defined MAC address 52:54:00:f0:80:d3 in network mk-auto-987609
	I0717 23:12:21.689354   60278 main.go:141] libmachine: (auto-987609) DBG | unable to find current IP address of domain auto-987609 in network mk-auto-987609
	I0717 23:12:21.689384   60278 main.go:141] libmachine: (auto-987609) DBG | I0717 23:12:21.689317   60318 retry.go:31] will retry after 2.219815297s: waiting for machine to come up
	I0717 23:12:17.767787   60956 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 23:12:17.767849   60956 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0717 23:12:17.767862   60956 cache.go:57] Caching tarball of preloaded images
	I0717 23:12:17.767984   60956 preload.go:174] Found /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 23:12:17.768000   60956 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 23:12:17.768140   60956 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/calico-987609/config.json ...
	I0717 23:12:17.768167   60956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/calico-987609/config.json: {Name:mkdb2f6832be9aa338d8f80d688c36e206df9bcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 23:12:17.768359   60956 start.go:365] acquiring machines lock for calico-987609: {Name:mk8a02d78ba6485e1a7d39c2e7feff944c6a9dbf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-07-17 22:51:06 UTC, ends at Mon 2023-07-17 23:12:25 UTC. --
	Jul 17 23:12:25 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:12:25.368361898Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a8fa0b43-6819-467e-9c15-efd6b11a53ca name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:25 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:12:25.368575966Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6,PodSandboxId:4eacdea27ce5a7ae9042ae2daa9f8d25e3e2d3daf10783ef9e8790c0b763c11d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634615489949724,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b840b23-0b6f-4ed1-9ae5-d96311b3b9f1,},Annotations:map[string]string{io.kubernetes.container.hash: d7492b86,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223,PodSandboxId:95235fcef3c171b2f22860f4a8b5d6ba7a237ff1e11632960d02df56d45cd22e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689634614300006420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-rqcjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f3bc4cf-fb20-413e-b367-27bcb997ab80,},Annotations:map[string]string{io.kubernetes.container.hash: 13018200,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPor
t\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524,PodSandboxId:3c32ee122426654c167d57395683946f1113e907775466fe6a8588dde9617912,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689634612047436890,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmtc8,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 1f8a0182-d1df-4609-86d1-7695a138e32f,},Annotations:map[string]string{io.kubernetes.container.hash: 814b84bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab,PodSandboxId:380099398de21ffca07cb840c91d32555a4f88f84b60395fdccc8a384bf7db1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689634589624955671,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcb438e83409f958b
acca207d1e163b7,},Annotations:map[string]string{io.kubernetes.container.hash: f860049,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad,PodSandboxId:fc683fbf695faa6e4f0f78ddef9635955ce1070e4d8ffe6461110e907e8c94f6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689634589388737757,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c582554d88f95b3e93
88f572e4b1d141,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0,PodSandboxId:dcc733eb7ced41fe33d429df849257fbff00840c78790efdaf5106779c9e59a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689634589131405003,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a62d210fd6117c9b32
e321081bbd5097,},Annotations:map[string]string{io.kubernetes.container.hash: 6a2a0900,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10,PodSandboxId:5456e993c07cf169db6fdcc6dd7ec81585827045bcbb22aa2aad5005b777c698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689634588860913827,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 06fd590960bf1556b243af4f4ad60e82,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a8fa0b43-6819-467e-9c15-efd6b11a53ca name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:25 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:12:25.403985288Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=de6fa621-81c8-49fd-b3e1-d89e233f3cc7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:25 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:12:25.404049179Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=de6fa621-81c8-49fd-b3e1-d89e233f3cc7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:25 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:12:25.404311780Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6,PodSandboxId:4eacdea27ce5a7ae9042ae2daa9f8d25e3e2d3daf10783ef9e8790c0b763c11d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634615489949724,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b840b23-0b6f-4ed1-9ae5-d96311b3b9f1,},Annotations:map[string]string{io.kubernetes.container.hash: d7492b86,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223,PodSandboxId:95235fcef3c171b2f22860f4a8b5d6ba7a237ff1e11632960d02df56d45cd22e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689634614300006420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-rqcjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f3bc4cf-fb20-413e-b367-27bcb997ab80,},Annotations:map[string]string{io.kubernetes.container.hash: 13018200,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPor
t\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524,PodSandboxId:3c32ee122426654c167d57395683946f1113e907775466fe6a8588dde9617912,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689634612047436890,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmtc8,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 1f8a0182-d1df-4609-86d1-7695a138e32f,},Annotations:map[string]string{io.kubernetes.container.hash: 814b84bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab,PodSandboxId:380099398de21ffca07cb840c91d32555a4f88f84b60395fdccc8a384bf7db1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689634589624955671,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcb438e83409f958b
acca207d1e163b7,},Annotations:map[string]string{io.kubernetes.container.hash: f860049,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad,PodSandboxId:fc683fbf695faa6e4f0f78ddef9635955ce1070e4d8ffe6461110e907e8c94f6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689634589388737757,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c582554d88f95b3e93
88f572e4b1d141,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0,PodSandboxId:dcc733eb7ced41fe33d429df849257fbff00840c78790efdaf5106779c9e59a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689634589131405003,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a62d210fd6117c9b32
e321081bbd5097,},Annotations:map[string]string{io.kubernetes.container.hash: 6a2a0900,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10,PodSandboxId:5456e993c07cf169db6fdcc6dd7ec81585827045bcbb22aa2aad5005b777c698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689634588860913827,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 06fd590960bf1556b243af4f4ad60e82,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=de6fa621-81c8-49fd-b3e1-d89e233f3cc7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:25 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:12:25.441063565Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=56faef06-a2a8-46dc-a604-37fc12135e32 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:25 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:12:25.441213489Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=56faef06-a2a8-46dc-a604-37fc12135e32 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:25 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:12:25.441380817Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6,PodSandboxId:4eacdea27ce5a7ae9042ae2daa9f8d25e3e2d3daf10783ef9e8790c0b763c11d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634615489949724,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b840b23-0b6f-4ed1-9ae5-d96311b3b9f1,},Annotations:map[string]string{io.kubernetes.container.hash: d7492b86,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223,PodSandboxId:95235fcef3c171b2f22860f4a8b5d6ba7a237ff1e11632960d02df56d45cd22e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689634614300006420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-rqcjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f3bc4cf-fb20-413e-b367-27bcb997ab80,},Annotations:map[string]string{io.kubernetes.container.hash: 13018200,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPor
t\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524,PodSandboxId:3c32ee122426654c167d57395683946f1113e907775466fe6a8588dde9617912,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689634612047436890,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmtc8,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 1f8a0182-d1df-4609-86d1-7695a138e32f,},Annotations:map[string]string{io.kubernetes.container.hash: 814b84bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab,PodSandboxId:380099398de21ffca07cb840c91d32555a4f88f84b60395fdccc8a384bf7db1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689634589624955671,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcb438e83409f958b
acca207d1e163b7,},Annotations:map[string]string{io.kubernetes.container.hash: f860049,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad,PodSandboxId:fc683fbf695faa6e4f0f78ddef9635955ce1070e4d8ffe6461110e907e8c94f6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689634589388737757,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c582554d88f95b3e93
88f572e4b1d141,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0,PodSandboxId:dcc733eb7ced41fe33d429df849257fbff00840c78790efdaf5106779c9e59a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689634589131405003,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a62d210fd6117c9b32
e321081bbd5097,},Annotations:map[string]string{io.kubernetes.container.hash: 6a2a0900,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10,PodSandboxId:5456e993c07cf169db6fdcc6dd7ec81585827045bcbb22aa2aad5005b777c698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689634588860913827,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 06fd590960bf1556b243af4f4ad60e82,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=56faef06-a2a8-46dc-a604-37fc12135e32 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:25 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:12:25.477245037Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=be150abd-3be1-4b41-baa1-8464161671f1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:25 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:12:25.477306987Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=be150abd-3be1-4b41-baa1-8464161671f1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:25 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:12:25.477454778Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6,PodSandboxId:4eacdea27ce5a7ae9042ae2daa9f8d25e3e2d3daf10783ef9e8790c0b763c11d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634615489949724,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b840b23-0b6f-4ed1-9ae5-d96311b3b9f1,},Annotations:map[string]string{io.kubernetes.container.hash: d7492b86,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223,PodSandboxId:95235fcef3c171b2f22860f4a8b5d6ba7a237ff1e11632960d02df56d45cd22e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689634614300006420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-rqcjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f3bc4cf-fb20-413e-b367-27bcb997ab80,},Annotations:map[string]string{io.kubernetes.container.hash: 13018200,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPor
t\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524,PodSandboxId:3c32ee122426654c167d57395683946f1113e907775466fe6a8588dde9617912,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689634612047436890,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmtc8,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 1f8a0182-d1df-4609-86d1-7695a138e32f,},Annotations:map[string]string{io.kubernetes.container.hash: 814b84bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab,PodSandboxId:380099398de21ffca07cb840c91d32555a4f88f84b60395fdccc8a384bf7db1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689634589624955671,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcb438e83409f958b
acca207d1e163b7,},Annotations:map[string]string{io.kubernetes.container.hash: f860049,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad,PodSandboxId:fc683fbf695faa6e4f0f78ddef9635955ce1070e4d8ffe6461110e907e8c94f6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689634589388737757,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c582554d88f95b3e93
88f572e4b1d141,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0,PodSandboxId:dcc733eb7ced41fe33d429df849257fbff00840c78790efdaf5106779c9e59a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689634589131405003,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a62d210fd6117c9b32
e321081bbd5097,},Annotations:map[string]string{io.kubernetes.container.hash: 6a2a0900,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10,PodSandboxId:5456e993c07cf169db6fdcc6dd7ec81585827045bcbb22aa2aad5005b777c698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689634588860913827,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 06fd590960bf1556b243af4f4ad60e82,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=be150abd-3be1-4b41-baa1-8464161671f1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:25 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:12:25.511833463Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ebd3a84d-07be-4cbf-868a-82f16961e345 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:25 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:12:25.511936132Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ebd3a84d-07be-4cbf-868a-82f16961e345 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:25 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:12:25.512212816Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6,PodSandboxId:4eacdea27ce5a7ae9042ae2daa9f8d25e3e2d3daf10783ef9e8790c0b763c11d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634615489949724,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b840b23-0b6f-4ed1-9ae5-d96311b3b9f1,},Annotations:map[string]string{io.kubernetes.container.hash: d7492b86,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223,PodSandboxId:95235fcef3c171b2f22860f4a8b5d6ba7a237ff1e11632960d02df56d45cd22e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689634614300006420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-rqcjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f3bc4cf-fb20-413e-b367-27bcb997ab80,},Annotations:map[string]string{io.kubernetes.container.hash: 13018200,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPor
t\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524,PodSandboxId:3c32ee122426654c167d57395683946f1113e907775466fe6a8588dde9617912,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689634612047436890,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmtc8,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 1f8a0182-d1df-4609-86d1-7695a138e32f,},Annotations:map[string]string{io.kubernetes.container.hash: 814b84bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab,PodSandboxId:380099398de21ffca07cb840c91d32555a4f88f84b60395fdccc8a384bf7db1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689634589624955671,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcb438e83409f958b
acca207d1e163b7,},Annotations:map[string]string{io.kubernetes.container.hash: f860049,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad,PodSandboxId:fc683fbf695faa6e4f0f78ddef9635955ce1070e4d8ffe6461110e907e8c94f6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689634589388737757,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c582554d88f95b3e93
88f572e4b1d141,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0,PodSandboxId:dcc733eb7ced41fe33d429df849257fbff00840c78790efdaf5106779c9e59a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689634589131405003,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a62d210fd6117c9b32
e321081bbd5097,},Annotations:map[string]string{io.kubernetes.container.hash: 6a2a0900,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10,PodSandboxId:5456e993c07cf169db6fdcc6dd7ec81585827045bcbb22aa2aad5005b777c698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689634588860913827,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 06fd590960bf1556b243af4f4ad60e82,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ebd3a84d-07be-4cbf-868a-82f16961e345 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:25 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:12:25.549417023Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e8132106-539a-449b-b7a1-433bd1eec7c4 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 23:12:25 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:12:25.550391665Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:143e863029a8af165e177f14434e0225c24067d1222e78f60c16f57311164e06,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5c6b9c-j8f2f,Uid:328c892b-7402-480b-bc29-a316c8fb7b1f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634613839926746,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5c6b9c-j8f2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 328c892b-7402-480b-bc29-a316c8fb7b1f,k8s-app: metrics-server,pod-template-hash: 74d5c6b9c,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T22:56:53.500939009Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4eacdea27ce5a7ae9042ae2daa9f8d25e3e2d3daf10783ef9e8790c0b763c11d,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:0b840b23-0b6f-4ed1-9ae5-d96311b
3b9f1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634613749240453,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b840b23-0b6f-4ed1-9ae5-d96311b3b9f1,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\
",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-07-17T22:56:53.394030217Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:95235fcef3c171b2f22860f4a8b5d6ba7a237ff1e11632960d02df56d45cd22e,Metadata:&PodSandboxMetadata{Name:coredns-5d78c9869d-rqcjj,Uid:9f3bc4cf-fb20-413e-b367-27bcb997ab80,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634612130362844,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5d78c9869d-rqcjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f3bc4cf-fb20-413e-b367-27bcb997ab80,k8s-app: kube-dns,pod-template-hash: 5d78c9869d,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T22:56:50.872942510Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3c32ee122426654c167d57395683946f1113e907775466fe6a8588dde9617912,Metadata:&PodSandboxMetadata{Name:kube-proxy-nmtc8,Uid:1f8a0182-d1df-4609-
86d1-7695a138e32f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634610766671966,Labels:map[string]string{controller-revision-hash: 56999f657b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-nmtc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f8a0182-d1df-4609-86d1-7695a138e32f,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-07-17T22:56:50.425708889Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fc683fbf695faa6e4f0f78ddef9635955ce1070e4d8ffe6461110e907e8c94f6,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-504828,Uid:c582554d88f95b3e9388f572e4b1d141,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634588356658139,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: c582554d88f95b3e9388f572e4b1d141,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c582554d88f95b3e9388f572e4b1d141,kubernetes.io/config.seen: 2023-07-17T22:56:27.821830041Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dcc733eb7ced41fe33d429df849257fbff00840c78790efdaf5106779c9e59a9,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-504828,Uid:a62d210fd6117c9b32e321081bbd5097,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634588334612841,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a62d210fd6117c9b32e321081bbd5097,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.118:8444,kubernetes.io/config.hash: a62d210fd6117c9b32e321081bbd5097,kubernetes.io/config.seen: 2023-07-17T2
2:56:27.821826064Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:380099398de21ffca07cb840c91d32555a4f88f84b60395fdccc8a384bf7db1c,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-504828,Uid:fcb438e83409f958bacca207d1e163b7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634588329279867,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcb438e83409f958bacca207d1e163b7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.118:2379,kubernetes.io/config.hash: fcb438e83409f958bacca207d1e163b7,kubernetes.io/config.seen: 2023-07-17T22:56:27.821831043Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5456e993c07cf169db6fdcc6dd7ec81585827045bcbb22aa2aad5005b777c698,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-
diff-port-504828,Uid:06fd590960bf1556b243af4f4ad60e82,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1689634588281829974,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06fd590960bf1556b243af4f4ad60e82,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 06fd590960bf1556b243af4f4ad60e82,kubernetes.io/config.seen: 2023-07-17T22:56:27.821828779Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=e8132106-539a-449b-b7a1-433bd1eec7c4 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 23:12:25 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:12:25.555916077Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=01679b24-148d-4946-a27c-09f1f60ca0f4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 23:12:25 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:12:25.556097774Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=01679b24-148d-4946-a27c-09f1f60ca0f4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 23:12:25 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:12:25.556363117Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6,PodSandboxId:4eacdea27ce5a7ae9042ae2daa9f8d25e3e2d3daf10783ef9e8790c0b763c11d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634615489949724,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b840b23-0b6f-4ed1-9ae5-d96311b3b9f1,},Annotations:map[string]string{io.kubernetes.container.hash: d7492b86,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223,PodSandboxId:95235fcef3c171b2f22860f4a8b5d6ba7a237ff1e11632960d02df56d45cd22e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689634614300006420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-rqcjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f3bc4cf-fb20-413e-b367-27bcb997ab80,},Annotations:map[string]string{io.kubernetes.container.hash: 13018200,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPor
t\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524,PodSandboxId:3c32ee122426654c167d57395683946f1113e907775466fe6a8588dde9617912,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689634612047436890,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmtc8,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 1f8a0182-d1df-4609-86d1-7695a138e32f,},Annotations:map[string]string{io.kubernetes.container.hash: 814b84bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab,PodSandboxId:380099398de21ffca07cb840c91d32555a4f88f84b60395fdccc8a384bf7db1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689634589624955671,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcb438e83409f958b
acca207d1e163b7,},Annotations:map[string]string{io.kubernetes.container.hash: f860049,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad,PodSandboxId:fc683fbf695faa6e4f0f78ddef9635955ce1070e4d8ffe6461110e907e8c94f6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689634589388737757,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c582554d88f95b3e93
88f572e4b1d141,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0,PodSandboxId:dcc733eb7ced41fe33d429df849257fbff00840c78790efdaf5106779c9e59a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689634589131405003,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a62d210fd6117c9b32
e321081bbd5097,},Annotations:map[string]string{io.kubernetes.container.hash: 6a2a0900,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10,PodSandboxId:5456e993c07cf169db6fdcc6dd7ec81585827045bcbb22aa2aad5005b777c698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689634588860913827,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 06fd590960bf1556b243af4f4ad60e82,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=01679b24-148d-4946-a27c-09f1f60ca0f4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 23:12:25 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:12:25.574238400Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5a21fb69-3672-4993-a930-e2a76b3381f8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:25 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:12:25.574334951Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5a21fb69-3672-4993-a930-e2a76b3381f8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:25 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:12:25.574515695Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6,PodSandboxId:4eacdea27ce5a7ae9042ae2daa9f8d25e3e2d3daf10783ef9e8790c0b763c11d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634615489949724,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b840b23-0b6f-4ed1-9ae5-d96311b3b9f1,},Annotations:map[string]string{io.kubernetes.container.hash: d7492b86,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223,PodSandboxId:95235fcef3c171b2f22860f4a8b5d6ba7a237ff1e11632960d02df56d45cd22e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689634614300006420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-rqcjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f3bc4cf-fb20-413e-b367-27bcb997ab80,},Annotations:map[string]string{io.kubernetes.container.hash: 13018200,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPor
t\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524,PodSandboxId:3c32ee122426654c167d57395683946f1113e907775466fe6a8588dde9617912,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689634612047436890,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmtc8,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 1f8a0182-d1df-4609-86d1-7695a138e32f,},Annotations:map[string]string{io.kubernetes.container.hash: 814b84bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab,PodSandboxId:380099398de21ffca07cb840c91d32555a4f88f84b60395fdccc8a384bf7db1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689634589624955671,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcb438e83409f958b
acca207d1e163b7,},Annotations:map[string]string{io.kubernetes.container.hash: f860049,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad,PodSandboxId:fc683fbf695faa6e4f0f78ddef9635955ce1070e4d8ffe6461110e907e8c94f6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689634589388737757,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c582554d88f95b3e93
88f572e4b1d141,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0,PodSandboxId:dcc733eb7ced41fe33d429df849257fbff00840c78790efdaf5106779c9e59a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689634589131405003,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a62d210fd6117c9b32
e321081bbd5097,},Annotations:map[string]string{io.kubernetes.container.hash: 6a2a0900,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10,PodSandboxId:5456e993c07cf169db6fdcc6dd7ec81585827045bcbb22aa2aad5005b777c698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689634588860913827,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 06fd590960bf1556b243af4f4ad60e82,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5a21fb69-3672-4993-a930-e2a76b3381f8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:25 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:12:25.605732237Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=383e3050-2ed5-40ea-9bcf-bb7a82beb316 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:25 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:12:25.605797975Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=383e3050-2ed5-40ea-9bcf-bb7a82beb316 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jul 17 23:12:25 default-k8s-diff-port-504828 crio[724]: time="2023-07-17 23:12:25.605947024Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6,PodSandboxId:4eacdea27ce5a7ae9042ae2daa9f8d25e3e2d3daf10783ef9e8790c0b763c11d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1689634615489949724,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b840b23-0b6f-4ed1-9ae5-d96311b3b9f1,},Annotations:map[string]string{io.kubernetes.container.hash: d7492b86,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223,PodSandboxId:95235fcef3c171b2f22860f4a8b5d6ba7a237ff1e11632960d02df56d45cd22e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1689634614300006420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d78c9869d-rqcjj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f3bc4cf-fb20-413e-b367-27bcb997ab80,},Annotations:map[string]string{io.kubernetes.container.hash: 13018200,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPor
t\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524,PodSandboxId:3c32ee122426654c167d57395683946f1113e907775466fe6a8588dde9617912,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f,State:CONTAINER_RUNNING,CreatedAt:1689634612047436890,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmtc8,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 1f8a0182-d1df-4609-86d1-7695a138e32f,},Annotations:map[string]string{io.kubernetes.container.hash: 814b84bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab,PodSandboxId:380099398de21ffca07cb840c91d32555a4f88f84b60395fdccc8a384bf7db1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,State:CONTAINER_RUNNING,CreatedAt:1689634589624955671,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcb438e83409f958b
acca207d1e163b7,},Annotations:map[string]string{io.kubernetes.container.hash: f860049,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad,PodSandboxId:fc683fbf695faa6e4f0f78ddef9635955ce1070e4d8ffe6461110e907e8c94f6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082,State:CONTAINER_RUNNING,CreatedAt:1689634589388737757,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c582554d88f95b3e93
88f572e4b1d141,},Annotations:map[string]string{io.kubernetes.container.hash: 159e1046,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0,PodSandboxId:dcc733eb7ced41fe33d429df849257fbff00840c78790efdaf5106779c9e59a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb,State:CONTAINER_RUNNING,CreatedAt:1689634589131405003,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a62d210fd6117c9b32
e321081bbd5097,},Annotations:map[string]string{io.kubernetes.container.hash: 6a2a0900,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10,PodSandboxId:5456e993c07cf169db6fdcc6dd7ec81585827045bcbb22aa2aad5005b777c698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e,State:CONTAINER_RUNNING,CreatedAt:1689634588860913827,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-504828,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 06fd590960bf1556b243af4f4ad60e82,},Annotations:map[string]string{io.kubernetes.container.hash: ab386da0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=383e3050-2ed5-40ea-9bcf-bb7a82beb316 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	4633e9baf3307       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   4eacdea27ce5a
	30afb33a6d03f       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   15 minutes ago      Running             coredns                   0                   95235fcef3c17
	a74d33cec1e84       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c   15 minutes ago      Running             kube-proxy                0                   3c32ee1224266
	7267626b74cd3       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681   15 minutes ago      Running             etcd                      2                   380099398de21
	4ea5728b3af9b       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a   15 minutes ago      Running             kube-scheduler            2                   fc683fbf695fa
	45949cc457a02       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a   15 minutes ago      Running             kube-apiserver            2                   dcc733eb7ced4
	7853c0ad23d63       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f   15 minutes ago      Running             kube-controller-manager   2                   5456e993c07cf
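	The table above is the CRI-level view of the node. Assuming the profile is still running and CRI-O is the container runtime (as the node info later in this report indicates), a roughly equivalent listing could be pulled manually with:
	
	  out/minikube-linux-amd64 -p default-k8s-diff-port-504828 ssh "sudo crictl ps -a"
	
	crictl ps -a reports the same container ID, image, state, attempt count and pod sandbox ID columns shown here.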
	
	* 
	* ==> coredns [30afb33a6d03fa2e16d83fdf01e5980db876c37a9791463dc2b8add948108223] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	[INFO] Reloading complete
	[INFO] 127.0.0.1:51522 - 14800 "HINFO IN 351336808927452243.1519533743927132388. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.00752014s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-504828
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-504828
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8
	                    minikube.k8s.io/name=default-k8s-diff-port-504828
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T22_56_37_0700
	                    minikube.k8s.io/version=v1.31.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 22:56:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-504828
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 23:12:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 23:12:16 +0000   Mon, 17 Jul 2023 22:56:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 23:12:16 +0000   Mon, 17 Jul 2023 22:56:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 23:12:16 +0000   Mon, 17 Jul 2023 22:56:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 23:12:16 +0000   Mon, 17 Jul 2023 22:56:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.118
	  Hostname:    default-k8s-diff-port-504828
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 968e7df5c4a84974bf4bfbd3b75f21df
	  System UUID:                968e7df5-c4a8-4974-bf4b-fbd3b75f21df
	  Boot ID:                    92ae26ce-42b7-4dfc-887b-53002c0c83b2
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-rqcjj                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-default-k8s-diff-port-504828                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-default-k8s-diff-port-504828             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-504828    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-nmtc8                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-default-k8s-diff-port-504828             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-74d5c6b9c-j8f2f                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node default-k8s-diff-port-504828 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node default-k8s-diff-port-504828 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node default-k8s-diff-port-504828 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             15m   kubelet          Node default-k8s-diff-port-504828 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                15m   kubelet          Node default-k8s-diff-port-504828 status is now: NodeReady
	  Normal  RegisteredNode           15m   node-controller  Node default-k8s-diff-port-504828 event: Registered Node default-k8s-diff-port-504828 in Controller
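	The node snapshot above could be reproduced against the live cluster (assuming the kubeconfig context is named after the minikube profile, which is minikube's default) with:
	
	  kubectl --context default-k8s-diff-port-504828 describe node default-k8s-diff-port-504828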
	
	* 
	* ==> dmesg <==
	* [Jul17 22:50] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.076290] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jul17 22:51] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.571021] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.164202] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.726842] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.651350] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.114891] systemd-fstab-generator[658]: Ignoring "noauto" for root device
	[  +0.194174] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.126073] systemd-fstab-generator[683]: Ignoring "noauto" for root device
	[  +0.235906] systemd-fstab-generator[707]: Ignoring "noauto" for root device
	[ +17.537823] systemd-fstab-generator[923]: Ignoring "noauto" for root device
	[ +16.404303] kauditd_printk_skb: 24 callbacks suppressed
	[Jul17 22:56] systemd-fstab-generator[3500]: Ignoring "noauto" for root device
	[  +9.810584] systemd-fstab-generator[3821]: Ignoring "noauto" for root device
	[ +23.725586] kauditd_printk_skb: 11 callbacks suppressed
	
	* 
	* ==> etcd [7267626b74cd38d8361753caad407ab699f9fb71aa6b4735e6a829d5163faeab] <==
	* {"level":"info","ts":"2023-07-17T22:56:31.535Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T22:56:31.536Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.72.118:2379"}
	{"level":"info","ts":"2023-07-17T22:56:31.537Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T22:56:31.537Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-17T22:56:31.564Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-17T22:56:31.575Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-17T22:56:31.580Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa04419eb9ff79c4","local-member-id":"adc6509a13463106","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T22:56:31.580Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T22:56:31.580Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T23:06:31.600Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":688}
	{"level":"info","ts":"2023-07-17T23:06:31.603Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":688,"took":"2.540855ms","hash":3055155226}
	{"level":"info","ts":"2023-07-17T23:06:31.603Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3055155226,"revision":688,"compact-revision":-1}
	{"level":"warn","ts":"2023-07-17T23:10:41.232Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.002574ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csidrivers/\" range_end:\"/registry/csidrivers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-17T23:10:41.233Z","caller":"traceutil/trace.go:171","msg":"trace[1425777834] range","detail":"{range_begin:/registry/csidrivers/; range_end:/registry/csidrivers0; response_count:0; response_revision:1133; }","duration":"133.652223ms","start":"2023-07-17T23:10:41.099Z","end":"2023-07-17T23:10:41.233Z","steps":["trace[1425777834] 'count revisions from in-memory index tree'  (duration: 132.808055ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T23:11:31.612Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":931}
	{"level":"info","ts":"2023-07-17T23:11:31.614Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":931,"took":"1.384144ms","hash":3256565052}
	{"level":"info","ts":"2023-07-17T23:11:31.614Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3256565052,"revision":931,"compact-revision":688}
	{"level":"info","ts":"2023-07-17T23:11:49.980Z","caller":"traceutil/trace.go:171","msg":"trace[1978724415] transaction","detail":"{read_only:false; response_revision:1189; number_of_response:1; }","duration":"826.339008ms","start":"2023-07-17T23:11:49.153Z","end":"2023-07-17T23:11:49.979Z","steps":["trace[1978724415] 'process raft request'  (duration: 826.17543ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T23:11:49.981Z","caller":"traceutil/trace.go:171","msg":"trace[683068350] linearizableReadLoop","detail":"{readStateIndex:1388; appliedIndex:1388; }","duration":"417.735503ms","start":"2023-07-17T23:11:49.563Z","end":"2023-07-17T23:11:49.981Z","steps":["trace[683068350] 'read index received'  (duration: 417.728317ms)","trace[683068350] 'applied index is now lower than readState.Index'  (duration: 6.121µs)"],"step_count":2}
	{"level":"warn","ts":"2023-07-17T23:11:49.981Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"226.368815ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-17T23:11:49.983Z","caller":"traceutil/trace.go:171","msg":"trace[357210893] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1189; }","duration":"228.129031ms","start":"2023-07-17T23:11:49.755Z","end":"2023-07-17T23:11:49.983Z","steps":["trace[357210893] 'agreement among raft nodes before linearized reading'  (duration: 226.283701ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T23:11:49.983Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"419.910255ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-17T23:11:49.983Z","caller":"traceutil/trace.go:171","msg":"trace[1928141109] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1189; }","duration":"420.018983ms","start":"2023-07-17T23:11:49.563Z","end":"2023-07-17T23:11:49.983Z","steps":["trace[1928141109] 'agreement among raft nodes before linearized reading'  (duration: 419.869403ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T23:11:49.983Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T23:11:49.153Z","time spent":"826.595365ms","remote":"127.0.0.1:51654","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1188 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2023-07-17T23:11:49.983Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T23:11:49.563Z","time spent":"420.105428ms","remote":"127.0.0.1:51658","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":29,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	
	* 
	* ==> kernel <==
	*  23:12:25 up 21 min,  0 users,  load average: 0.17, 0.22, 0.25
	Linux default-k8s-diff-port-504828 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [45949cc457a0288087200a6d26f2bf420c4dc3a488e50b764458aadd36d7d2f0] <==
	* I0717 23:09:33.577106       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0717 23:09:34.746873       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 23:09:34.747048       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 23:09:34.747121       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 23:09:34.748021       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 23:09:34.748086       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 23:09:34.749197       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 23:10:33.576625       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.111.159.95:443: connect: connection refused
	I0717 23:10:33.576769       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0717 23:11:33.576021       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.111.159.95:443: connect: connection refused
	I0717 23:11:33.576100       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0717 23:11:33.748529       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.111.159.95:443: connect: connection refused
	I0717 23:11:33.748639       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0717 23:11:34.748683       1 handler_proxy.go:100] no RequestInfo found in the context
	W0717 23:11:34.748749       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 23:11:34.749016       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 23:11:34.749054       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0717 23:11:34.749223       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 23:11:34.750371       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 23:11:49.984642       1 trace.go:219] Trace[1627310732]: "Update" accept:application/json, */*,audit-id:59a849b0-5fb5-4fe2-ac23-723d4110cf67,client:192.168.72.118,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (17-Jul-2023 23:11:49.152) (total time: 832ms):
	Trace[1627310732]: ["GuaranteedUpdate etcd3" audit-id:59a849b0-5fb5-4fe2-ac23-723d4110cf67,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 831ms (23:11:49.152)
	Trace[1627310732]:  ---"Txn call completed" 831ms (23:11:49.984)]
	Trace[1627310732]: [832.203946ms] [832.203946ms] END
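	The repeated 503s when proxying to kube-system/metrics-server:443 are consistent with the stale metrics.k8s.io/v1beta1 discovery errors in the controller-manager log below. A hypothetical follow-up check (not part of the captured run) to confirm the aggregated API is unavailable:
	
	  kubectl --context default-k8s-diff-port-504828 get apiservice v1beta1.metrics.k8s.io
	  kubectl --context default-k8s-diff-port-504828 -n kube-system get pods -l k8s-app=metrics-server
	
	An Available=False APIService together with a not-Ready metrics-server pod would match the "dial tcp 10.111.159.95:443: connect: connection refused" errors above.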
	
	* 
	* ==> kube-controller-manager [7853c0ad23d63caced49f0a854641736b93d4b49d60c08de0dd6585cecb3eb10] <==
	* W0717 23:06:20.499439       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:06:49.972042       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:06:50.507935       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:07:19.978257       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:07:20.517118       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:07:49.985867       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:07:50.526972       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:08:19.990868       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:08:20.536329       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:08:49.996767       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:08:50.544633       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:09:20.003079       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:09:20.554694       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:09:50.009832       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:09:50.564096       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:10:20.017540       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:10:20.574820       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:10:50.023641       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:10:50.587078       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:11:20.028884       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:11:20.597594       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:11:50.035551       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:11:50.606338       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	E0717 23:12:20.041910       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	W0717 23:12:20.615629       1 garbagecollector.go:816] failed to discover some groups: map[metrics.k8s.io/v1beta1:stale GroupVersion discovery: metrics.k8s.io/v1beta1]
	
	* 
	* ==> kube-proxy [a74d33cec1e84be792a70b35e466bfcbb5f3c6ece58b41fd6427362967bdb524] <==
	* I0717 22:56:55.222387       1 node.go:141] Successfully retrieved node IP: 192.168.72.118
	I0717 22:56:55.222672       1 server_others.go:110] "Detected node IP" address="192.168.72.118"
	I0717 22:56:55.222763       1 server_others.go:554] "Using iptables proxy"
	I0717 22:56:55.312287       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0717 22:56:55.312396       1 server_others.go:192] "Using iptables Proxier"
	I0717 22:56:55.312533       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 22:56:55.313643       1 server.go:658] "Version info" version="v1.27.3"
	I0717 22:56:55.313813       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 22:56:55.318002       1 config.go:188] "Starting service config controller"
	I0717 22:56:55.318490       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 22:56:55.318761       1 config.go:97] "Starting endpoint slice config controller"
	I0717 22:56:55.319002       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 22:56:55.330515       1 config.go:315] "Starting node config controller"
	I0717 22:56:55.334251       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 22:56:55.418967       1 shared_informer.go:318] Caches are synced for service config
	I0717 22:56:55.419231       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0717 22:56:55.435766       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [4ea5728b3af9be3f801f97cbd6df800b04a2f9b2288a10d5e0f37ee192cc28ad] <==
	* W0717 22:56:33.792004       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 22:56:33.792074       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 22:56:33.792252       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 22:56:33.795248       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 22:56:33.795415       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 22:56:33.795535       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 22:56:33.796113       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 22:56:33.796241       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 22:56:33.796697       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 22:56:33.796897       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 22:56:33.798350       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 22:56:33.798611       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 22:56:34.626320       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 22:56:34.626449       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 22:56:34.646934       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 22:56:34.647017       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 22:56:34.691493       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 22:56:34.691544       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 22:56:34.753196       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 22:56:34.753252       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 22:56:34.857013       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 22:56:34.857246       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 22:56:35.263941       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 22:56:35.263998       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 22:56:37.365604       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-07-17 22:51:06 UTC, ends at Mon 2023-07-17 23:12:26 UTC. --
	Jul 17 23:09:37 default-k8s-diff-port-504828 kubelet[3828]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 23:09:37 default-k8s-diff-port-504828 kubelet[3828]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 23:09:37 default-k8s-diff-port-504828 kubelet[3828]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 23:09:42 default-k8s-diff-port-504828 kubelet[3828]: E0717 23:09:42.549322    3828 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-j8f2f" podUID=328c892b-7402-480b-bc29-a316c8fb7b1f
	Jul 17 23:09:57 default-k8s-diff-port-504828 kubelet[3828]: E0717 23:09:57.549542    3828 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-j8f2f" podUID=328c892b-7402-480b-bc29-a316c8fb7b1f
	Jul 17 23:10:08 default-k8s-diff-port-504828 kubelet[3828]: E0717 23:10:08.548731    3828 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-j8f2f" podUID=328c892b-7402-480b-bc29-a316c8fb7b1f
	Jul 17 23:10:19 default-k8s-diff-port-504828 kubelet[3828]: E0717 23:10:19.551502    3828 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-j8f2f" podUID=328c892b-7402-480b-bc29-a316c8fb7b1f
	Jul 17 23:10:34 default-k8s-diff-port-504828 kubelet[3828]: E0717 23:10:34.549266    3828 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-j8f2f" podUID=328c892b-7402-480b-bc29-a316c8fb7b1f
	Jul 17 23:10:37 default-k8s-diff-port-504828 kubelet[3828]: E0717 23:10:37.638828    3828 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 17 23:10:37 default-k8s-diff-port-504828 kubelet[3828]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 23:10:37 default-k8s-diff-port-504828 kubelet[3828]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 23:10:37 default-k8s-diff-port-504828 kubelet[3828]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 23:10:46 default-k8s-diff-port-504828 kubelet[3828]: E0717 23:10:46.548422    3828 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-j8f2f" podUID=328c892b-7402-480b-bc29-a316c8fb7b1f
	Jul 17 23:10:58 default-k8s-diff-port-504828 kubelet[3828]: E0717 23:10:58.548844    3828 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-j8f2f" podUID=328c892b-7402-480b-bc29-a316c8fb7b1f
	Jul 17 23:11:10 default-k8s-diff-port-504828 kubelet[3828]: E0717 23:11:10.549057    3828 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-j8f2f" podUID=328c892b-7402-480b-bc29-a316c8fb7b1f
	Jul 17 23:11:21 default-k8s-diff-port-504828 kubelet[3828]: E0717 23:11:21.549651    3828 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-j8f2f" podUID=328c892b-7402-480b-bc29-a316c8fb7b1f
	Jul 17 23:11:36 default-k8s-diff-port-504828 kubelet[3828]: E0717 23:11:36.548725    3828 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-j8f2f" podUID=328c892b-7402-480b-bc29-a316c8fb7b1f
	Jul 17 23:11:37 default-k8s-diff-port-504828 kubelet[3828]: E0717 23:11:37.641948    3828 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 17 23:11:37 default-k8s-diff-port-504828 kubelet[3828]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 23:11:37 default-k8s-diff-port-504828 kubelet[3828]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 23:11:37 default-k8s-diff-port-504828 kubelet[3828]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 17 23:11:37 default-k8s-diff-port-504828 kubelet[3828]: E0717 23:11:37.733457    3828 container_manager_linux.go:515] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Jul 17 23:11:47 default-k8s-diff-port-504828 kubelet[3828]: E0717 23:11:47.549710    3828 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-j8f2f" podUID=328c892b-7402-480b-bc29-a316c8fb7b1f
	Jul 17 23:12:00 default-k8s-diff-port-504828 kubelet[3828]: E0717 23:12:00.549370    3828 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-j8f2f" podUID=328c892b-7402-480b-bc29-a316c8fb7b1f
	Jul 17 23:12:14 default-k8s-diff-port-504828 kubelet[3828]: E0717 23:12:14.549194    3828 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-74d5c6b9c-j8f2f" podUID=328c892b-7402-480b-bc29-a316c8fb7b1f
	
	* 
	* ==> storage-provisioner [4633e9baf3307d5e857b44f4930cd3bb8fa8520ace7fe6ba208546c1b165fcb6] <==
	* I0717 22:56:55.650544       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 22:56:55.661900       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 22:56:55.662045       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 22:56:55.675926       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 22:56:55.677091       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c339a55d-3fdc-4f37-b597-026e65addd23", APIVersion:"v1", ResourceVersion:"417", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-504828_69d84a11-bd06-4f89-90fb-b0fd139857e2 became leader
	I0717 22:56:55.677264       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-504828_69d84a11-bd06-4f89-90fb-b0fd139857e2!
	I0717 22:56:55.781523       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-504828_69d84a11-bd06-4f89-90fb-b0fd139857e2!
	

                                                
                                                
-- /stdout --
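Note: the repeated kubelet errors in the dump above ("Could not set up iptables canary" / "can't initialize ip6tables table `nat'") indicate that the ip6tables nat table is unavailable in the guest kernel, most likely because the ip6table_nat module is not loaded; they are noise for this test rather than the cause of the failure. A minimal check from the host, assuming the default-k8s-diff-port-504828 profile is still running (profile name taken from the log above), would be:

	# Does the ip6tables nat table exist inside the minikube guest?
	$ out/minikube-linux-amd64 -p default-k8s-diff-port-504828 ssh -- "sudo ip6tables -t nat -L -n"
	# If it reports the table does not exist, loading the module should silence the canary error
	$ out/minikube-linux-amd64 -p default-k8s-diff-port-504828 ssh -- "sudo modprobe ip6table_nat"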
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-504828 -n default-k8s-diff-port-504828
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-504828 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5c6b9c-j8f2f
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-504828 describe pod metrics-server-74d5c6b9c-j8f2f
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-504828 describe pod metrics-server-74d5c6b9c-j8f2f: exit status 1 (64.083218ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5c6b9c-j8f2f" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-504828 describe pod metrics-server-74d5c6b9c-j8f2f: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (113.05s)
E0717 23:15:31.229448   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/client.crt: no such file or directory
E0717 23:15:31.747586   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: no such file or directory
E0717 23:15:33.396578   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/client.crt: no such file or directory
E0717 23:15:36.423634   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/client.crt: no such file or directory
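Note: the only non-running pod in the post-mortem above is metrics-server-74d5c6b9c-j8f2f, stuck in ImagePullBackOff on fake.domain/registry.k8s.io/echoserver:1.4 (an unreachable registry, so the pull can never succeed); the same unavailable metrics-server also explains the controller-manager's repeated "stale GroupVersion discovery: metrics.k8s.io/v1beta1" messages, since the aggregated metrics APIService never becomes ready. A quick way to confirm both symptoms against a live cluster, assuming the profile still exists and the addon uses its default k8s-app=metrics-server label, would be:

	# Why is the metrics-server pod not running?
	$ kubectl --context default-k8s-diff-port-504828 -n kube-system describe pod -l k8s-app=metrics-server
	# Is the aggregated metrics API Available? (False here, matching the stale discovery warnings)
	$ kubectl --context default-k8s-diff-port-504828 get apiservices v1beta1.metrics.k8s.io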

                                                
                                    

Test pass (225/288)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 7.19
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.06
10 TestDownloadOnly/v1.27.3/json-events 4.76
11 TestDownloadOnly/v1.27.3/preload-exists 0
15 TestDownloadOnly/v1.27.3/LogsDuration 0.06
16 TestDownloadOnly/DeleteAll 0.13
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.12
19 TestBinaryMirror 0.53
20 TestOffline 139.02
22 TestAddons/Setup 142.69
24 TestAddons/parallel/Registry 16.47
26 TestAddons/parallel/InspektorGadget 11.77
27 TestAddons/parallel/MetricsServer 6.52
28 TestAddons/parallel/HelmTiller 11.34
30 TestAddons/parallel/CSI 60.23
31 TestAddons/parallel/Headlamp 14.66
32 TestAddons/parallel/CloudSpanner 6.32
35 TestAddons/serial/GCPAuth/Namespaces 0.13
37 TestCertOptions 71.61
38 TestCertExpiration 247.59
40 TestForceSystemdFlag 50.13
41 TestForceSystemdEnv 70.46
43 TestKVMDriverInstallOrUpdate 1.62
47 TestErrorSpam/setup 43.63
48 TestErrorSpam/start 0.33
49 TestErrorSpam/status 0.72
50 TestErrorSpam/pause 1.45
51 TestErrorSpam/unpause 1.6
52 TestErrorSpam/stop 2.21
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 61.75
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 50.74
59 TestFunctional/serial/KubeContext 0.04
60 TestFunctional/serial/KubectlGetPods 0.08
63 TestFunctional/serial/CacheCmd/cache/add_remote 3.09
64 TestFunctional/serial/CacheCmd/cache/add_local 1.01
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.04
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
68 TestFunctional/serial/CacheCmd/cache/cache_reload 1.56
69 TestFunctional/serial/CacheCmd/cache/delete 0.08
70 TestFunctional/serial/MinikubeKubectlCmd 0.1
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
72 TestFunctional/serial/ExtraConfig 37.09
73 TestFunctional/serial/ComponentHealth 0.07
74 TestFunctional/serial/LogsCmd 1.37
75 TestFunctional/serial/LogsFileCmd 1.39
76 TestFunctional/serial/InvalidService 4.24
78 TestFunctional/parallel/ConfigCmd 0.31
79 TestFunctional/parallel/DashboardCmd 30.76
80 TestFunctional/parallel/DryRun 0.28
81 TestFunctional/parallel/InternationalLanguage 0.13
82 TestFunctional/parallel/StatusCmd 1.02
86 TestFunctional/parallel/ServiceCmdConnect 8.75
87 TestFunctional/parallel/AddonsCmd 0.12
88 TestFunctional/parallel/PersistentVolumeClaim 27.68
90 TestFunctional/parallel/SSHCmd 0.5
91 TestFunctional/parallel/CpCmd 1.01
92 TestFunctional/parallel/MySQL 33.16
93 TestFunctional/parallel/FileSync 0.24
94 TestFunctional/parallel/CertSync 1.71
98 TestFunctional/parallel/NodeLabels 0.07
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.45
103 TestFunctional/parallel/ServiceCmd/DeployApp 15.27
104 TestFunctional/parallel/Version/short 0.05
105 TestFunctional/parallel/Version/components 1.02
106 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
107 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
108 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
109 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
110 TestFunctional/parallel/ImageCommands/ImageBuild 2.72
111 TestFunctional/parallel/ImageCommands/Setup 1.02
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
115 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.58
116 TestFunctional/parallel/ProfileCmd/profile_not_create 0.58
117 TestFunctional/parallel/ProfileCmd/profile_list 0.34
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
119 TestFunctional/parallel/MountCmd/any-port 24.85
120 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.84
121 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 14.05
122 TestFunctional/parallel/ServiceCmd/List 0.27
123 TestFunctional/parallel/ServiceCmd/JSONOutput 0.34
124 TestFunctional/parallel/ServiceCmd/HTTPS 0.35
125 TestFunctional/parallel/ServiceCmd/Format 0.49
126 TestFunctional/parallel/ServiceCmd/URL 0.37
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.45
128 TestFunctional/parallel/ImageCommands/ImageRemove 1.06
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 3.83
130 TestFunctional/parallel/MountCmd/specific-port 2.38
131 TestFunctional/parallel/MountCmd/VerifyCleanup 1.44
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 3.37
142 TestFunctional/delete_addon-resizer_images 0.07
143 TestFunctional/delete_my-image_image 0.01
144 TestFunctional/delete_minikube_cached_images 0.01
148 TestIngressAddonLegacy/StartLegacyK8sCluster 108.26
150 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 13.53
151 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.58
155 TestJSONOutput/start/Command 99
156 TestJSONOutput/start/Audit 0
158 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
161 TestJSONOutput/pause/Command 0.63
162 TestJSONOutput/pause/Audit 0
164 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/unpause/Command 0.61
168 TestJSONOutput/unpause/Audit 0
170 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/stop/Command 7.08
174 TestJSONOutput/stop/Audit 0
176 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
178 TestErrorJSONOutput 0.18
183 TestMainNoArgs 0.04
184 TestMinikubeProfile 98.45
187 TestMountStart/serial/StartWithMountFirst 25.39
188 TestMountStart/serial/VerifyMountFirst 0.38
189 TestMountStart/serial/StartWithMountSecond 27.29
190 TestMountStart/serial/VerifyMountSecond 0.38
191 TestMountStart/serial/DeleteFirst 0.72
192 TestMountStart/serial/VerifyMountPostDelete 0.38
193 TestMountStart/serial/Stop 1.08
194 TestMountStart/serial/RestartStopped 21.39
195 TestMountStart/serial/VerifyMountPostStop 0.37
198 TestMultiNode/serial/FreshStart2Nodes 105.25
199 TestMultiNode/serial/DeployApp2Nodes 4.16
201 TestMultiNode/serial/AddNode 42.44
202 TestMultiNode/serial/ProfileList 0.21
203 TestMultiNode/serial/CopyFile 7.15
204 TestMultiNode/serial/StopNode 2.93
205 TestMultiNode/serial/StartAfterStop 27.9
207 TestMultiNode/serial/DeleteNode 1.57
209 TestMultiNode/serial/RestartMultiNode 443.98
210 TestMultiNode/serial/ValidateNameConflict 47.75
217 TestScheduledStopUnix 118.82
223 TestKubernetesUpgrade 137.79
235 TestStoppedBinaryUpgrade/Setup 0.34
241 TestNetworkPlugins/group/false 3
246 TestPause/serial/Start 88
248 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
249 TestNoKubernetes/serial/StartWithK8s 89.45
251 TestStoppedBinaryUpgrade/MinikubeLogs 0.4
253 TestStartStop/group/old-k8s-version/serial/FirstStart 165.42
254 TestNoKubernetes/serial/StartWithStopK8s 14.29
255 TestNoKubernetes/serial/Start 25.94
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
257 TestNoKubernetes/serial/ProfileList 29.65
258 TestNoKubernetes/serial/Stop 1.09
259 TestNoKubernetes/serial/StartNoArgs 21.32
261 TestStartStop/group/embed-certs/serial/FirstStart 119.38
263 TestStartStop/group/no-preload/serial/FirstStart 156.75
264 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
266 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 150.6
267 TestStartStop/group/old-k8s-version/serial/DeployApp 8.5
268 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 3.03
270 TestStartStop/group/embed-certs/serial/DeployApp 10.53
271 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.29
273 TestStartStop/group/no-preload/serial/DeployApp 9.5
274 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.46
275 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.21
277 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.26
280 TestStartStop/group/old-k8s-version/serial/SecondStart 807.67
282 TestStartStop/group/embed-certs/serial/SecondStart 813.05
285 TestStartStop/group/no-preload/serial/SecondStart 510.49
286 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 834
295 TestStartStop/group/newest-cni/serial/FirstStart 62.06
297 TestStartStop/group/newest-cni/serial/DeployApp 0
298 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.74
299 TestStartStop/group/newest-cni/serial/Stop 10.28
300 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
301 TestStartStop/group/newest-cni/serial/SecondStart 50.42
302 TestNetworkPlugins/group/auto/Start 101.03
303 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
304 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
305 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
306 TestStartStop/group/newest-cni/serial/Pause 2.81
307 TestNetworkPlugins/group/kindnet/Start 90.68
308 TestNetworkPlugins/group/calico/Start 144.48
309 TestNetworkPlugins/group/custom-flannel/Start 149.56
310 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
311 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
312 TestNetworkPlugins/group/kindnet/NetCatPod 11.44
313 TestNetworkPlugins/group/auto/KubeletFlags 0.25
314 TestNetworkPlugins/group/auto/NetCatPod 13.52
315 TestNetworkPlugins/group/kindnet/DNS 0.2
316 TestNetworkPlugins/group/kindnet/Localhost 0.18
317 TestNetworkPlugins/group/kindnet/HairPin 0.2
318 TestNetworkPlugins/group/auto/DNS 0.34
319 TestNetworkPlugins/group/auto/Localhost 0.2
320 TestNetworkPlugins/group/auto/HairPin 0.18
321 TestNetworkPlugins/group/enable-default-cni/Start 101.51
322 TestNetworkPlugins/group/flannel/Start 108.39
323 TestNetworkPlugins/group/calico/ControllerPod 5.03
324 TestNetworkPlugins/group/calico/KubeletFlags 0.21
325 TestNetworkPlugins/group/calico/NetCatPod 14.48
326 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.42
327 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.94
328 TestNetworkPlugins/group/calico/DNS 0.31
329 TestNetworkPlugins/group/calico/Localhost 0.21
330 TestNetworkPlugins/group/calico/HairPin 0.27
331 TestNetworkPlugins/group/custom-flannel/DNS 0.22
332 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
333 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
334 TestNetworkPlugins/group/bridge/Start 102.96
335 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
336 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.46
337 TestNetworkPlugins/group/flannel/ControllerPod 5.02
338 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
339 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
340 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
341 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
342 TestNetworkPlugins/group/flannel/NetCatPod 12.41
343 TestNetworkPlugins/group/flannel/DNS 0.18
344 TestNetworkPlugins/group/flannel/Localhost 0.17
345 TestNetworkPlugins/group/flannel/HairPin 0.18
346 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
347 TestNetworkPlugins/group/bridge/NetCatPod 10.42
348 TestNetworkPlugins/group/bridge/DNS 0.17
349 TestNetworkPlugins/group/bridge/Localhost 0.15
350 TestNetworkPlugins/group/bridge/HairPin 0.16
x
+
TestDownloadOnly/v1.16.0/json-events (7.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-896488 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-896488 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (7.193327128s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (7.19s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-896488
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-896488: exit status 85 (55.037427ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-896488 | jenkins | v1.31.0 | 17 Jul 23 21:40 UTC |          |
	|         | -p download-only-896488        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 21:40:36
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 21:40:36.278326   23002 out.go:296] Setting OutFile to fd 1 ...
	I0717 21:40:36.278456   23002 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:40:36.278468   23002 out.go:309] Setting ErrFile to fd 2...
	I0717 21:40:36.278475   23002 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:40:36.278679   23002 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-15759/.minikube/bin
	W0717 21:40:36.278795   23002 root.go:314] Error reading config file at /home/jenkins/minikube-integration/16899-15759/.minikube/config/config.json: open /home/jenkins/minikube-integration/16899-15759/.minikube/config/config.json: no such file or directory
	I0717 21:40:36.279360   23002 out.go:303] Setting JSON to true
	I0717 21:40:36.280237   23002 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4988,"bootTime":1689625048,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 21:40:36.280296   23002 start.go:138] virtualization: kvm guest
	I0717 21:40:36.282948   23002 out.go:97] [download-only-896488] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 21:40:36.284850   23002 out.go:169] MINIKUBE_LOCATION=16899
	W0717 21:40:36.283043   23002 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball: no such file or directory
	I0717 21:40:36.283080   23002 notify.go:220] Checking for updates...
	I0717 21:40:36.287932   23002 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 21:40:36.289355   23002 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 21:40:36.290914   23002 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-15759/.minikube
	I0717 21:40:36.292373   23002 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0717 21:40:36.294837   23002 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 21:40:36.295110   23002 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 21:40:36.412010   23002 out.go:97] Using the kvm2 driver based on user configuration
	I0717 21:40:36.412032   23002 start.go:298] selected driver: kvm2
	I0717 21:40:36.412036   23002 start.go:880] validating driver "kvm2" against <nil>
	I0717 21:40:36.412320   23002 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:40:36.412436   23002 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16899-15759/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 21:40:36.426567   23002 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.0
	I0717 21:40:36.426637   23002 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 21:40:36.427120   23002 start_flags.go:382] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0717 21:40:36.427300   23002 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 21:40:36.427337   23002 cni.go:84] Creating CNI manager for ""
	I0717 21:40:36.427363   23002 cni.go:152] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 21:40:36.427379   23002 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 21:40:36.427389   23002 start_flags.go:319] config:
	{Name:download-only-896488 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-896488 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:40:36.427662   23002 iso.go:125] acquiring lock: {Name:mk0fc3164b0f4222cb2c3b274841202f1f6c0fc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:40:36.429552   23002 out.go:97] Downloading VM boot image ...
	I0717 21:40:36.429597   23002 download.go:107] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/iso/amd64/minikube-v1.31.0-amd64.iso
	I0717 21:40:38.743934   23002 out.go:97] Starting control plane node download-only-896488 in cluster download-only-896488
	I0717 21:40:38.743961   23002 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0717 21:40:38.764961   23002 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0717 21:40:38.765003   23002 cache.go:57] Caching tarball of preloaded images
	I0717 21:40:38.765155   23002 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0717 21:40:38.766820   23002 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0717 21:40:38.766840   23002 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0717 21:40:38.790806   23002 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/16899-15759/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-896488"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.06s)
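Note: the non-zero exit here is expected; a --download-only profile only populates the ISO and preload caches and never creates a machine, so `minikube logs` has no control-plane node to read from and exits non-zero (status 85 in this run), which the test treats as a pass. The hint at the end of the dump would create the node if one actually wanted a cluster from this profile, e.g. (illustrative, reusing the flags from the Audit table above):

	$ out/minikube-linux-amd64 start -p download-only-896488 --driver=kvm2 --container-runtime=crio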

                                                
                                    
x
+
TestDownloadOnly/v1.27.3/json-events (4.76s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-896488 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-896488 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.756661387s)
--- PASS: TestDownloadOnly/v1.27.3/json-events (4.76s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.3/preload-exists
--- PASS: TestDownloadOnly/v1.27.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.3/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.3/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-896488
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-896488: exit status 85 (56.4199ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-896488 | jenkins | v1.31.0 | 17 Jul 23 21:40 UTC |          |
	|         | -p download-only-896488        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-896488 | jenkins | v1.31.0 | 17 Jul 23 21:40 UTC |          |
	|         | -p download-only-896488        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 21:40:43
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 21:40:43.530429   23059 out.go:296] Setting OutFile to fd 1 ...
	I0717 21:40:43.530573   23059 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:40:43.530582   23059 out.go:309] Setting ErrFile to fd 2...
	I0717 21:40:43.530587   23059 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:40:43.530787   23059 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-15759/.minikube/bin
	W0717 21:40:43.530897   23059 root.go:314] Error reading config file at /home/jenkins/minikube-integration/16899-15759/.minikube/config/config.json: open /home/jenkins/minikube-integration/16899-15759/.minikube/config/config.json: no such file or directory
	I0717 21:40:43.531274   23059 out.go:303] Setting JSON to true
	I0717 21:40:43.532047   23059 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4996,"bootTime":1689625048,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 21:40:43.532103   23059 start.go:138] virtualization: kvm guest
	I0717 21:40:43.534078   23059 out.go:97] [download-only-896488] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 21:40:43.535510   23059 out.go:169] MINIKUBE_LOCATION=16899
	I0717 21:40:43.534253   23059 notify.go:220] Checking for updates...
	I0717 21:40:43.538235   23059 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 21:40:43.539629   23059 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 21:40:43.541090   23059 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-15759/.minikube
	I0717 21:40:43.542497   23059 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-896488"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.3/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-896488
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.53s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-879454 --alsologtostderr --binary-mirror http://127.0.0.1:43523 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-879454" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-879454
--- PASS: TestBinaryMirror (0.53s)

                                                
                                    
x
+
TestOffline (139.02s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-696820 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-696820 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (2m17.853774179s)
helpers_test.go:175: Cleaning up "offline-crio-696820" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-696820
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-696820: (1.169721541s)
--- PASS: TestOffline (139.02s)

                                                
                                    
x
+
TestAddons/Setup (142.69s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-436248 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-436248 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m22.691633764s)
--- PASS: TestAddons/Setup (142.69s)

                                                
                                    
x
+
TestAddons/parallel/Registry (16.47s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 29.105457ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-7lx77" [10227c49-9b69-4d98-a71d-d2255449d1fd] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.015071589s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-j6mzk" [cac045cb-1481-4983-a628-954619436235] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.012218758s
addons_test.go:316: (dbg) Run:  kubectl --context addons-436248 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-436248 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-436248 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.339487903s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-amd64 -p addons-436248 ip
2023/07/17 21:43:27 [DEBUG] GET http://192.168.39.220:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p addons-436248 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.47s)
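Note on what this test exercises: the registry addon is verified from two directions, first from inside the cluster (a throwaway busybox pod wget-ing the registry Service DNS name) and then from the host via the node IP on port 5000. A manual equivalent, assuming the addons-436248 profile is up and noting that the /v2/ path is just the standard registry API root rather than something the test itself requests, would be:

	# In-cluster: the registry Service answers on its cluster DNS name
	$ kubectl --context addons-436248 run --rm -it registry-check --image=gcr.io/k8s-minikube/busybox --restart=Never -- \
	    wget --spider -S http://registry.kube-system.svc.cluster.local
	# From the host: registry-proxy exposes port 5000 on the node IP
	$ curl -s http://$(out/minikube-linux-amd64 -p addons-436248 ip):5000/v2/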

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.77s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-4vzf5" [f78eb00f-3716-42f4-b9f5-be464e34514a] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.013240997s
addons_test.go:817: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-436248
addons_test.go:817: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-436248: (6.755979457s)
--- PASS: TestAddons/parallel/InspektorGadget (11.77s)

TestAddons/parallel/MetricsServer (6.52s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 29.010644ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-844d8db974-7bttp" [5a8b487f-451d-4f4a-9963-5a0a1498b248] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.015201499s
addons_test.go:391: (dbg) Run:  kubectl --context addons-436248 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p addons-436248 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p addons-436248 addons disable metrics-server --alsologtostderr -v=1: (1.393119439s)
--- PASS: TestAddons/parallel/MetricsServer (6.52s)

TestAddons/parallel/HelmTiller (11.34s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 7.598108ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6847666dc-8dj6l" [2e6dcebd-1b5f-43e2-a558-6b96996ffc39] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.043423133s
addons_test.go:449: (dbg) Run:  kubectl --context addons-436248 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-436248 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.067416801s)
addons_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p addons-436248 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p addons-436248 addons disable helm-tiller --alsologtostderr -v=1: (1.219405339s)
--- PASS: TestAddons/parallel/HelmTiller (11.34s)

TestAddons/parallel/CSI (60.23s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 5.992057ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-436248 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436248 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436248 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436248 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436248 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436248 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436248 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436248 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436248 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436248 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436248 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436248 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436248 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-436248 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [64e5fc35-ed70-41d3-9847-283007356211] Pending
helpers_test.go:344: "task-pv-pod" [64e5fc35-ed70-41d3-9847-283007356211] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [64e5fc35-ed70-41d3-9847-283007356211] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.01490198s
addons_test.go:560: (dbg) Run:  kubectl --context addons-436248 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-436248 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-436248 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-436248 delete pod task-pv-pod
addons_test.go:570: (dbg) Done: kubectl --context addons-436248 delete pod task-pv-pod: (1.258184939s)
addons_test.go:576: (dbg) Run:  kubectl --context addons-436248 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-436248 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436248 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-436248 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [7ef50906-251d-43f3-800f-20cca6d589db] Pending
helpers_test.go:344: "task-pv-pod-restore" [7ef50906-251d-43f3-800f-20cca6d589db] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [7ef50906-251d-43f3-800f-20cca6d589db] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.023103991s
addons_test.go:602: (dbg) Run:  kubectl --context addons-436248 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-436248 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-436248 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-amd64 -p addons-436248 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-amd64 -p addons-436248 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.804159083s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-amd64 -p addons-436248 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (60.23s)
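Condensed, the flow the test drives is: bind a claim, mount it in a pod, snapshot it, delete the original pod and claim, then restore into a new claim and mount that. The manifests are the testdata files named in the log (paths relative to the test's working directory); the restore claim is assumed to reference the snapshot as its dataSource, which is what the csi-hostpath-driver example is meant to demonstrate.

    kubectl --context addons-436248 create -f testdata/csi-hostpath-driver/pvc.yaml            # claim "hpvc"
    kubectl --context addons-436248 create -f testdata/csi-hostpath-driver/pv-pod.yaml         # pod "task-pv-pod" mounts it
    kubectl --context addons-436248 create -f testdata/csi-hostpath-driver/snapshot.yaml       # VolumeSnapshot "new-snapshot-demo"
    kubectl --context addons-436248 delete pod task-pv-pod
    kubectl --context addons-436248 delete pvc hpvc
    kubectl --context addons-436248 create -f testdata/csi-hostpath-driver/pvc-restore.yaml    # claim "hpvc-restore" from the snapshot
    kubectl --context addons-436248 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml # pod "task-pv-pod-restore" mounts the restored claim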

TestAddons/parallel/Headlamp (14.66s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-436248 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-436248 --alsologtostderr -v=1: (1.640881677s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-66f6498c69-97mrg" [07438fa1-1bae-4ea9-b549-f6cef51a1865] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-66f6498c69-97mrg" [07438fa1-1bae-4ea9-b549-f6cef51a1865] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.018918726s
--- PASS: TestAddons/parallel/Headlamp (14.66s)

TestAddons/parallel/CloudSpanner (6.32s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-88647b4cb-6jzp4" [3e368de2-11bf-4cd8-b342-f941f1b94e00] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.014976334s
addons_test.go:836: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-436248
addons_test.go:836: (dbg) Done: out/minikube-linux-amd64 addons disable cloud-spanner -p addons-436248: (1.268005865s)
--- PASS: TestAddons/parallel/CloudSpanner (6.32s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-436248 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-436248 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestCertOptions (71.61s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-259016 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E0717 22:38:11.892173   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-259016 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m10.187416389s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-259016 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-259016 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-259016 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-259016" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-259016
--- PASS: TestCertOptions (71.61s)
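The SANs and port injected by --apiserver-ips, --apiserver-names and --apiserver-port can be read back directly. The openssl and kubectl invocations are the ones the test runs; the grep filters are only added here for readability.

    out/minikube-linux-amd64 -p cert-options-259016 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"                        # expect 192.168.15.15 and www.google.com among the SANs
    kubectl --context cert-options-259016 config view | grep server   # expect the server URL to end in :8555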

TestCertExpiration (247.59s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-366864 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-366864 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (47.644032594s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-366864 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-366864 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (19.066236915s)
helpers_test.go:175: Cleaning up "cert-expiration-366864" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-366864
--- PASS: TestCertExpiration (247.59s)
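The two starts differ only in --cert-expiration (3m, then 8760h). Whether the control-plane certificates were re-issued on the second start can be checked by reading the expiry back from the node; the certificate path matches the one TestCertOptions inspects above, and -enddate is a standard openssl x509 flag.

    out/minikube-linux-amd64 -p cert-expiration-366864 ssh \
      "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"   # prints notAfter=<date>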

TestForceSystemdFlag (50.13s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-201894 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-201894 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (48.995781894s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-201894 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-201894" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-201894
--- PASS: TestForceSystemdFlag (50.13s)

TestForceSystemdEnv (70.46s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-939164 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-939164 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m9.567613365s)
helpers_test.go:175: Cleaning up "force-systemd-env-939164" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-939164
--- PASS: TestForceSystemdEnv (70.46s)

TestKVMDriverInstallOrUpdate (1.62s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.62s)

TestErrorSpam/setup (43.63s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-972009 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-972009 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-972009 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-972009 --driver=kvm2  --container-runtime=crio: (43.633575583s)
--- PASS: TestErrorSpam/setup (43.63s)

TestErrorSpam/start (0.33s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-972009 --log_dir /tmp/nospam-972009 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-972009 --log_dir /tmp/nospam-972009 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-972009 --log_dir /tmp/nospam-972009 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

TestErrorSpam/status (0.72s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-972009 --log_dir /tmp/nospam-972009 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-972009 --log_dir /tmp/nospam-972009 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-972009 --log_dir /tmp/nospam-972009 status
--- PASS: TestErrorSpam/status (0.72s)

TestErrorSpam/pause (1.45s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-972009 --log_dir /tmp/nospam-972009 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-972009 --log_dir /tmp/nospam-972009 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-972009 --log_dir /tmp/nospam-972009 pause
--- PASS: TestErrorSpam/pause (1.45s)

TestErrorSpam/unpause (1.6s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-972009 --log_dir /tmp/nospam-972009 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-972009 --log_dir /tmp/nospam-972009 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-972009 --log_dir /tmp/nospam-972009 unpause
--- PASS: TestErrorSpam/unpause (1.60s)

TestErrorSpam/stop (2.21s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-972009 --log_dir /tmp/nospam-972009 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-972009 --log_dir /tmp/nospam-972009 stop: (2.075168435s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-972009 --log_dir /tmp/nospam-972009 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-972009 --log_dir /tmp/nospam-972009 stop
--- PASS: TestErrorSpam/stop (2.21s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/16899-15759/.minikube/files/etc/test/nested/copy/22990/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (61.75s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-767593 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-767593 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m1.749293717s)
--- PASS: TestFunctional/serial/StartWithProxy (61.75s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (50.74s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-767593 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-767593 --alsologtostderr -v=8: (50.735122956s)
functional_test.go:659: soft start took 50.735804924s for "functional-767593" cluster.
--- PASS: TestFunctional/serial/SoftStart (50.74s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-767593 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-767593 cache add registry.k8s.io/pause:3.3: (1.061564678s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-767593 cache add registry.k8s.io/pause:latest: (1.038891517s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.09s)

TestFunctional/serial/CacheCmd/cache/add_local (1.01s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-767593 /tmp/TestFunctionalserialCacheCmdcacheadd_local3550242331/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 cache add minikube-local-cache-test:functional-767593
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 cache delete minikube-local-cache-test:functional-767593
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-767593
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.01s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-767593 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (202.421386ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)
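The sequence above is the cache-reload round trip: remove a cached image inside the node with crictl, confirm it is gone (inspecti exits 1 with "no such image"), run cache reload to push everything in the local cache back into the runtime, then confirm the image is present again.

    out/minikube-linux-amd64 -p functional-767593 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-767593 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit status 1
    out/minikube-linux-amd64 -p functional-767593 cache reload
    out/minikube-linux-amd64 -p functional-767593 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again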

TestFunctional/serial/CacheCmd/cache/delete (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 kubectl -- --context functional-767593 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-767593 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (37.09s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-767593 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-767593 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.092563726s)
functional_test.go:757: restart took 37.092684527s for "functional-767593" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.09s)
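--extra-config takes component.flag=value pairs, so the start above enables the NamespaceAutoProvision admission plugin on the API server and restarts the cluster. One way to confirm the flag reached the static pod is to read the kube-apiserver manifest back; the component=kube-apiserver label selector is the usual kubeadm convention and is an assumption, not something this test checks.

    out/minikube-linux-amd64 start -p functional-767593 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    kubectl --context functional-767593 -n kube-system get pod -l component=kube-apiserver -o yaml \
      | grep enable-admission-plugins   # expect NamespaceAutoProvision in the flag value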

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-767593 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.37s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-767593 logs: (1.36499268s)
--- PASS: TestFunctional/serial/LogsCmd (1.37s)

TestFunctional/serial/LogsFileCmd (1.39s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 logs --file /tmp/TestFunctionalserialLogsFileCmd3189587893/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-767593 logs --file /tmp/TestFunctionalserialLogsFileCmd3189587893/001/logs.txt: (1.390722056s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.39s)

TestFunctional/serial/InvalidService (4.24s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-767593 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-767593
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-767593: exit status 115 (279.205265ms)
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.120:30129 |
	|-----------|-------------|-------------|-----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-767593 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.24s)

TestFunctional/parallel/ConfigCmd (0.31s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-767593 config get cpus: exit status 14 (60.474334ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-767593 config get cpus: exit status 14 (42.598918ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.31s)
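The config get/set/unset subcommands round-trip per-profile values; config get on a key that is not set exits with status 14, which is what the two Non-zero exits above record. A minimal round trip with the same key:

    out/minikube-linux-amd64 -p functional-767593 config set cpus 2
    out/minikube-linux-amd64 -p functional-767593 config get cpus     # prints 2
    out/minikube-linux-amd64 -p functional-767593 config unset cpus
    out/minikube-linux-amd64 -p functional-767593 config get cpus     # exit status 14: key not found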

TestFunctional/parallel/DashboardCmd (30.76s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-767593 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-767593 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 29438: os: process already finished
E0717 21:53:17.012637   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/DashboardCmd (30.76s)

TestFunctional/parallel/DryRun (0.28s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-767593 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-767593 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (143.909096ms)
-- stdout --
	* [functional-767593] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-15759/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-15759/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0717 21:52:44.996150   29308 out.go:296] Setting OutFile to fd 1 ...
	I0717 21:52:44.996358   29308 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:52:44.996371   29308 out.go:309] Setting ErrFile to fd 2...
	I0717 21:52:44.996379   29308 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:52:44.996667   29308 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-15759/.minikube/bin
	I0717 21:52:44.997418   29308 out.go:303] Setting JSON to false
	I0717 21:52:44.998690   29308 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5717,"bootTime":1689625048,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 21:52:44.998783   29308 start.go:138] virtualization: kvm guest
	I0717 21:52:45.001575   29308 out.go:177] * [functional-767593] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 21:52:45.003517   29308 notify.go:220] Checking for updates...
	I0717 21:52:45.003541   29308 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 21:52:45.005431   29308 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 21:52:45.007147   29308 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 21:52:45.008767   29308 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-15759/.minikube
	I0717 21:52:45.010458   29308 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 21:52:45.012085   29308 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 21:52:45.013961   29308 config.go:182] Loaded profile config "functional-767593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 21:52:45.014342   29308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:52:45.014400   29308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:52:45.028804   29308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46153
	I0717 21:52:45.029227   29308 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:52:45.029850   29308 main.go:141] libmachine: Using API Version  1
	I0717 21:52:45.029876   29308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:52:45.030247   29308 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:52:45.030467   29308 main.go:141] libmachine: (functional-767593) Calling .DriverName
	I0717 21:52:45.030722   29308 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 21:52:45.031037   29308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:52:45.031080   29308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:52:45.045105   29308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35519
	I0717 21:52:45.045551   29308 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:52:45.046084   29308 main.go:141] libmachine: Using API Version  1
	I0717 21:52:45.046107   29308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:52:45.046400   29308 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:52:45.046601   29308 main.go:141] libmachine: (functional-767593) Calling .DriverName
	I0717 21:52:45.083169   29308 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 21:52:45.084720   29308 start.go:298] selected driver: kvm2
	I0717 21:52:45.084733   29308 start.go:880] validating driver "kvm2" against &{Name:functional-767593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-767
593 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.120 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jen
kins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:52:45.084872   29308 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 21:52:45.087192   29308 out.go:177] 
	W0717 21:52:45.088712   29308 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0717 21:52:45.090253   29308 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-767593 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)

TestFunctional/parallel/InternationalLanguage (0.13s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-767593 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-767593 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (133.091437ms)
-- stdout --
	* [functional-767593] minikube v1.31.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-15759/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-15759/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0717 21:52:45.265049   29364 out.go:296] Setting OutFile to fd 1 ...
	I0717 21:52:45.265240   29364 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:52:45.265253   29364 out.go:309] Setting ErrFile to fd 2...
	I0717 21:52:45.265261   29364 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:52:45.266059   29364 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-15759/.minikube/bin
	I0717 21:52:45.266652   29364 out.go:303] Setting JSON to false
	I0717 21:52:45.267528   29364 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5717,"bootTime":1689625048,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 21:52:45.267591   29364 start.go:138] virtualization: kvm guest
	I0717 21:52:45.269909   29364 out.go:177] * [functional-767593] minikube v1.31.0 sur Ubuntu 20.04 (kvm/amd64)
	I0717 21:52:45.271563   29364 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 21:52:45.271538   29364 notify.go:220] Checking for updates...
	I0717 21:52:45.273131   29364 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 21:52:45.274798   29364 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 21:52:45.276364   29364 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-15759/.minikube
	I0717 21:52:45.277969   29364 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 21:52:45.279594   29364 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 21:52:45.281364   29364 config.go:182] Loaded profile config "functional-767593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 21:52:45.281729   29364 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:52:45.281789   29364 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:52:45.296052   29364 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35563
	I0717 21:52:45.296416   29364 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:52:45.296959   29364 main.go:141] libmachine: Using API Version  1
	I0717 21:52:45.296981   29364 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:52:45.297291   29364 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:52:45.297455   29364 main.go:141] libmachine: (functional-767593) Calling .DriverName
	I0717 21:52:45.297722   29364 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 21:52:45.298039   29364 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 21:52:45.298100   29364 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 21:52:45.312582   29364 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39845
	I0717 21:52:45.312977   29364 main.go:141] libmachine: () Calling .GetVersion
	I0717 21:52:45.313467   29364 main.go:141] libmachine: Using API Version  1
	I0717 21:52:45.313482   29364 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 21:52:45.313799   29364 main.go:141] libmachine: () Calling .GetMachineName
	I0717 21:52:45.313980   29364 main.go:141] libmachine: (functional-767593) Calling .DriverName
	I0717 21:52:45.348307   29364 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0717 21:52:45.349729   29364 start.go:298] selected driver: kvm2
	I0717 21:52:45.349742   29364 start.go:880] validating driver "kvm2" against &{Name:functional-767593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-767
593 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.120 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jen
kins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:52:45.349878   29364 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 21:52:45.352126   29364 out.go:177] 
	W0717 21:52:45.353662   29364 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0717 21:52:45.355229   29364 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)
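
The French RSRC_INSUFFICIENT_REQ_MEMORY message above says the requested allocation of 250 MiB is below the usable minimum of 1800 MB, which is exactly the localized error this test wants to see. A minimal way to reproduce that failure mode by hand, assuming a French locale is available on the host; the --dry-run flag is an assumption added here so the existing profile is not mutated:

# Request far less memory than minikube's 1800 MB minimum so start aborts during
# validation; LC_ALL=fr makes it print the localized error shown above.
LC_ALL=fr out/minikube-linux-amd64 start -p functional-767593 \
  --dry-run --memory=250mb --alsologtostderr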

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.02s)
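
The three invocations above exercise the default text output, a custom Go template, and JSON. A small scripting sketch along the same lines; the field names come from the template used above, and jq on the host is an assumption:

# Same status fields, rendered three ways.
out/minikube-linux-amd64 -p functional-767593 status
out/minikube-linux-amd64 -p functional-767593 status \
  -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
# JSON output is convenient for scripts, e.g. gating on the host state:
out/minikube-linux-amd64 -p functional-767593 status -o json | jq -r '.Host'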

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (8.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-767593 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-767593 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6fb669fc84-s4lrj" [81dfeb01-0c86-4483-af28-8d373189e43c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-6fb669fc84-s4lrj" [81dfeb01-0c86-4483-af28-8d373189e43c] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.016049466s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.39.120:31244
functional_test.go:1674: http://192.168.39.120:31244: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-6fb669fc84-s4lrj

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.120:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.120:31244
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
E0717 21:53:11.892429   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
E0717 21:53:11.898211   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
E0717 21:53:11.908570   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
E0717 21:53:11.928919   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
E0717 21:53:11.969346   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
E0717 21:53:12.049782   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
E0717 21:53:12.210227   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
E0717 21:53:12.530890   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
E0717 21:53:13.171807   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
E0717 21:53:14.452132   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.75s)
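
The flow above is: create an echoserver deployment, expose it as a NodePort service, resolve the node URL through minikube, and GET it; the echoserver body confirms the request reached the pod. A hand-run equivalent against the same cluster; the rollout wait stands in for the label polling the harness does:

kubectl --context functional-767593 create deployment hello-node-connect \
  --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-767593 expose deployment hello-node-connect \
  --type=NodePort --port=8080
kubectl --context functional-767593 rollout status deployment/hello-node-connect
# minikube resolves the NodePort into a full node URL, e.g. http://192.168.39.120:31244
URL=$(out/minikube-linux-amd64 -p functional-767593 service hello-node-connect --url)
curl -s "$URL"    # echoserver replies with the hostname, headers and request body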

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (27.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [adbfe79f-8f8f-40ae-bc1d-005f806ce998] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.015650233s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-767593 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-767593 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-767593 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-767593 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-767593 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d1fe7a08-ab47-42fe-a29c-68019034550f] Pending
helpers_test.go:344: "sp-pod" [d1fe7a08-ab47-42fe-a29c-68019034550f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d1fe7a08-ab47-42fe-a29c-68019034550f] Running
2023/07/17 21:53:15 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.014121828s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-767593 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-767593 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-767593 delete -f testdata/storage-provisioner/pod.yaml: (1.164572237s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-767593 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6bc7a208-72da-49d6-a2b1-82667acd797c] Pending
helpers_test.go:344: "sp-pod" [6bc7a208-72da-49d6-a2b1-82667acd797c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0717 21:53:22.133546   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [6bc7a208-72da-49d6-a2b1-82667acd797c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.010740945s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-767593 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.68s)
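
The sequence above proves persistence: a pod writes /tmp/mount/foo through a PVC-backed volume, the pod is deleted, and a fresh pod still sees the file. The testdata manifests are not reproduced in this log; the YAML below is a hypothetical equivalent that matches the names visible above (claim myclaim, pod sp-pod, container myfrontend, mount at /tmp/mount), while the image and storage size are assumptions:

kubectl --context functional-767593 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - name: mypd
      mountPath: /tmp/mount
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
EOF
# Write through the mount, recreate the pod, and the file is still there,
# because the data lives on the provisioned volume rather than in the pod:
kubectl --context functional-767593 exec sp-pod -- touch /tmp/mount/foo
kubectl --context functional-767593 delete pod sp-pod
# (re-apply the pod manifest above, wait for it to be Running, then)
kubectl --context functional-767593 exec sp-pod -- ls /tmp/mount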

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 ssh -n functional-767593 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 cp functional-767593:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1048613710/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 ssh -n functional-767593 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.01s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (33.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-767593 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-7db894d786-tc7v8" [7501c89e-487a-467a-9b2b-ddcfa9059411] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-7db894d786-tc7v8" [7501c89e-487a-467a-9b2b-ddcfa9059411] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 28.074664856s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-767593 exec mysql-7db894d786-tc7v8 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-767593 exec mysql-7db894d786-tc7v8 -- mysql -ppassword -e "show databases;": exit status 1 (610.080951ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-767593 exec mysql-7db894d786-tc7v8 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-767593 exec mysql-7db894d786-tc7v8 -- mysql -ppassword -e "show databases;": exit status 1 (254.148351ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-767593 exec mysql-7db894d786-tc7v8 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-767593 exec mysql-7db894d786-tc7v8 -- mysql -ppassword -e "show databases;": exit status 1 (277.910978ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-767593 exec mysql-7db894d786-tc7v8 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (33.16s)
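
The first exec attempts fail with ERROR 1045 and then ERROR 2002 while mysqld is still initializing inside the freshly started container; the test simply retries until "show databases;" succeeds. A hand-rolled version of that retry, reusing the pod name and password from the log above:

POD=mysql-7db894d786-tc7v8
for i in $(seq 1 30); do
  # Succeeds only once mysqld has finished its first-boot initialization and is
  # accepting connections on its socket.
  if kubectl --context functional-767593 exec "$POD" -- \
       mysql -ppassword -e "show databases;"; then
    break
  fi
  sleep 5
done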

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/22990/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 ssh "sudo cat /etc/test/nested/copy/22990/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)
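
What this checks is minikube's file sync: files staged under the profile's .minikube/files directory are copied into the guest at the same relative path when the machine is started. A sketch of staging the file checked above, assuming the default ~/.minikube location rather than the Jenkins-specific MINIKUBE_HOME used in this run:

# Stage a file for sync; it lands in the guest at /etc/test/nested/copy/22990/hosts
# on the next 'minikube start' of this profile.
mkdir -p "$HOME/.minikube/files/etc/test/nested/copy/22990"
echo "Test file for checking file sync process" \
  > "$HOME/.minikube/files/etc/test/nested/copy/22990/hosts"
# After restarting the profile, verify inside the VM:
out/minikube-linux-amd64 -p functional-767593 ssh \
  "sudo cat /etc/test/nested/copy/22990/hosts"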

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/22990.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 ssh "sudo cat /etc/ssl/certs/22990.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/22990.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 ssh "sudo cat /usr/share/ca-certificates/22990.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/229902.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 ssh "sudo cat /etc/ssl/certs/229902.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/229902.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 ssh "sudo cat /usr/share/ca-certificates/229902.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.71s)
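
The paired checks above reflect minikube's certificate sync: certs placed under the profile's certs directory are installed into the guest at /etc/ssl/certs and /usr/share/ca-certificates, along with an OpenSSL subject-hash alias such as 51391683.0. A quick way to confirm the hash relationship on the host, assuming the cert was staged under ~/.minikube/certs before start:

# The numeric alias is the OpenSSL subject hash of the synced certificate;
# for this run it should print 51391683, matching /etc/ssl/certs/51391683.0.
openssl x509 -noout -hash -in "$HOME/.minikube/certs/22990.pem"
out/minikube-linux-amd64 -p functional-767593 ssh "sudo cat /etc/ssl/certs/51391683.0"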

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-767593 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-767593 ssh "sudo systemctl is-active docker": exit status 1 (230.807548ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-767593 ssh "sudo systemctl is-active containerd": exit status 1 (219.616154ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)
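
Exit status 3 here is systemd's convention for a unit that is not active, so the test expects a non-zero exit together with "inactive" on stdout for both docker and containerd, since this profile runs CRI-O. The same checks by hand:

# crio should be the only active runtime on this node.
out/minikube-linux-amd64 -p functional-767593 ssh "sudo systemctl is-active crio"
out/minikube-linux-amd64 -p functional-767593 ssh "sudo systemctl is-active docker" \
  || echo "docker unit is not active"
out/minikube-linux-amd64 -p functional-767593 ssh "sudo systemctl is-active containerd" \
  || echo "containerd unit is not active"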

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (15.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-767593 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-767593 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-775766b4cc-pq7k5" [73b10aef-8cb1-4681-9f55-015191a42a95] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-775766b4cc-pq7k5" [73b10aef-8cb1-4681-9f55-015191a42a95] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 15.024028162s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (15.27s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-amd64 -p functional-767593 version -o=json --components: (1.022832195s)
--- PASS: TestFunctional/parallel/Version/components (1.02s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-767593 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.3
registry.k8s.io/kube-proxy:v1.27.3
registry.k8s.io/kube-controller-manager:v1.27.3
registry.k8s.io/kube-apiserver:v1.27.3
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-767593
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-767593
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-767593 image ls --format short --alsologtostderr:
I0717 21:53:04.401144   30352 out.go:296] Setting OutFile to fd 1 ...
I0717 21:53:04.401260   30352 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:53:04.401271   30352 out.go:309] Setting ErrFile to fd 2...
I0717 21:53:04.401278   30352 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:53:04.401478   30352 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-15759/.minikube/bin
I0717 21:53:04.402061   30352 config.go:182] Loaded profile config "functional-767593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 21:53:04.402179   30352 config.go:182] Loaded profile config "functional-767593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 21:53:04.402570   30352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 21:53:04.402629   30352 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 21:53:04.417599   30352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43275
I0717 21:53:04.418027   30352 main.go:141] libmachine: () Calling .GetVersion
I0717 21:53:04.418595   30352 main.go:141] libmachine: Using API Version  1
I0717 21:53:04.418617   30352 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 21:53:04.418950   30352 main.go:141] libmachine: () Calling .GetMachineName
I0717 21:53:04.419124   30352 main.go:141] libmachine: (functional-767593) Calling .GetState
I0717 21:53:04.420924   30352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 21:53:04.420965   30352 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 21:53:04.435458   30352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44555
I0717 21:53:04.435824   30352 main.go:141] libmachine: () Calling .GetVersion
I0717 21:53:04.436256   30352 main.go:141] libmachine: Using API Version  1
I0717 21:53:04.436276   30352 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 21:53:04.436542   30352 main.go:141] libmachine: () Calling .GetMachineName
I0717 21:53:04.436728   30352 main.go:141] libmachine: (functional-767593) Calling .DriverName
I0717 21:53:04.436905   30352 ssh_runner.go:195] Run: systemctl --version
I0717 21:53:04.436928   30352 main.go:141] libmachine: (functional-767593) Calling .GetSSHHostname
I0717 21:53:04.439661   30352 main.go:141] libmachine: (functional-767593) DBG | domain functional-767593 has defined MAC address 52:54:00:27:4c:1d in network mk-functional-767593
I0717 21:53:04.440109   30352 main.go:141] libmachine: (functional-767593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:1d", ip: ""} in network mk-functional-767593: {Iface:virbr1 ExpiryTime:2023-07-17 22:50:00 +0000 UTC Type:0 Mac:52:54:00:27:4c:1d Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:functional-767593 Clientid:01:52:54:00:27:4c:1d}
I0717 21:53:04.440146   30352 main.go:141] libmachine: (functional-767593) DBG | domain functional-767593 has defined IP address 192.168.39.120 and MAC address 52:54:00:27:4c:1d in network mk-functional-767593
I0717 21:53:04.440277   30352 main.go:141] libmachine: (functional-767593) Calling .GetSSHPort
I0717 21:53:04.440457   30352 main.go:141] libmachine: (functional-767593) Calling .GetSSHKeyPath
I0717 21:53:04.440627   30352 main.go:141] libmachine: (functional-767593) Calling .GetSSHUsername
I0717 21:53:04.440761   30352 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/functional-767593/id_rsa Username:docker}
I0717 21:53:04.579930   30352 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 21:53:04.641313   30352 main.go:141] libmachine: Making call to close driver server
I0717 21:53:04.641330   30352 main.go:141] libmachine: (functional-767593) Calling .Close
I0717 21:53:04.641606   30352 main.go:141] libmachine: Successfully made call to close driver server
I0717 21:53:04.641629   30352 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 21:53:04.641667   30352 main.go:141] libmachine: (functional-767593) DBG | Closing plugin on server side
I0717 21:53:04.641775   30352 main.go:141] libmachine: Making call to close driver server
I0717 21:53:04.641800   30352 main.go:141] libmachine: (functional-767593) Calling .Close
I0717 21:53:04.642055   30352 main.go:141] libmachine: Successfully made call to close driver server
I0717 21:53:04.642073   30352 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)
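
The stderr above shows where the listing comes from: minikube SSHes into the node and runs 'sudo crictl images --output json', then formats the result. The same data can be read directly, for example to print only the repo tags (jq on the host is an assumption):

out/minikube-linux-amd64 -p functional-767593 ssh "sudo crictl images --output json" \
  | jq -r '.images[].repoTags[]'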

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-767593 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-scheduler          | v1.27.3            | 41697ceeb70b3 | 59.8MB |
| localhost/my-image                      | functional-767593  | 9c72f3672fef0 | 1.47MB |
| registry.k8s.io/kube-apiserver          | v1.27.3            | 08a0c939e61b7 | 122MB  |
| registry.k8s.io/kube-proxy              | v1.27.3            | 5780543258cf0 | 72.7MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/library/mysql                 | 5.7                | 2be84dd575ee2 | 588MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-767593  | 7626bc3417b16 | 3.35kB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/kindest/kindnetd              | v20230511-dc714da8 | b0b1fa0f58c6e | 65.2MB |
| gcr.io/google-containers/addon-resizer  | functional-767593  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/etcd                    | 3.5.7-0            | 86b6af7dd652c | 297MB  |
| registry.k8s.io/kube-controller-manager | v1.27.3            | 7cffc01dba0e1 | 114MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-767593 image ls --format table --alsologtostderr:
I0717 21:53:07.888076   30560 out.go:296] Setting OutFile to fd 1 ...
I0717 21:53:07.888256   30560 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:53:07.888267   30560 out.go:309] Setting ErrFile to fd 2...
I0717 21:53:07.888274   30560 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:53:07.888603   30560 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-15759/.minikube/bin
I0717 21:53:07.889411   30560 config.go:182] Loaded profile config "functional-767593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 21:53:07.889574   30560 config.go:182] Loaded profile config "functional-767593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 21:53:07.890132   30560 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 21:53:07.890191   30560 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 21:53:07.904487   30560 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36815
I0717 21:53:07.904962   30560 main.go:141] libmachine: () Calling .GetVersion
I0717 21:53:07.905662   30560 main.go:141] libmachine: Using API Version  1
I0717 21:53:07.905688   30560 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 21:53:07.906055   30560 main.go:141] libmachine: () Calling .GetMachineName
I0717 21:53:07.906265   30560 main.go:141] libmachine: (functional-767593) Calling .GetState
I0717 21:53:07.908288   30560 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 21:53:07.908330   30560 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 21:53:07.922748   30560 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37051
I0717 21:53:07.923229   30560 main.go:141] libmachine: () Calling .GetVersion
I0717 21:53:07.923751   30560 main.go:141] libmachine: Using API Version  1
I0717 21:53:07.923778   30560 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 21:53:07.924136   30560 main.go:141] libmachine: () Calling .GetMachineName
I0717 21:53:07.924354   30560 main.go:141] libmachine: (functional-767593) Calling .DriverName
I0717 21:53:07.924590   30560 ssh_runner.go:195] Run: systemctl --version
I0717 21:53:07.924620   30560 main.go:141] libmachine: (functional-767593) Calling .GetSSHHostname
I0717 21:53:07.927784   30560 main.go:141] libmachine: (functional-767593) DBG | domain functional-767593 has defined MAC address 52:54:00:27:4c:1d in network mk-functional-767593
I0717 21:53:07.928350   30560 main.go:141] libmachine: (functional-767593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:1d", ip: ""} in network mk-functional-767593: {Iface:virbr1 ExpiryTime:2023-07-17 22:50:00 +0000 UTC Type:0 Mac:52:54:00:27:4c:1d Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:functional-767593 Clientid:01:52:54:00:27:4c:1d}
I0717 21:53:07.928381   30560 main.go:141] libmachine: (functional-767593) DBG | domain functional-767593 has defined IP address 192.168.39.120 and MAC address 52:54:00:27:4c:1d in network mk-functional-767593
I0717 21:53:07.928602   30560 main.go:141] libmachine: (functional-767593) Calling .GetSSHPort
I0717 21:53:07.928797   30560 main.go:141] libmachine: (functional-767593) Calling .GetSSHKeyPath
I0717 21:53:07.928972   30560 main.go:141] libmachine: (functional-767593) Calling .GetSSHUsername
I0717 21:53:07.929113   30560 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/functional-767593/id_rsa Username:docker}
I0717 21:53:08.025911   30560 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 21:53:08.075028   30560 main.go:141] libmachine: Making call to close driver server
I0717 21:53:08.075046   30560 main.go:141] libmachine: (functional-767593) Calling .Close
I0717 21:53:08.075391   30560 main.go:141] libmachine: (functional-767593) DBG | Closing plugin on server side
I0717 21:53:08.075401   30560 main.go:141] libmachine: Successfully made call to close driver server
I0717 21:53:08.075418   30560 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 21:53:08.075435   30560 main.go:141] libmachine: Making call to close driver server
I0717 21:53:08.075448   30560 main.go:141] libmachine: (functional-767593) Calling .Close
I0717 21:53:08.075709   30560 main.go:141] libmachine: Successfully made call to close driver server
I0717 21:53:08.075727   30560 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-767593 image ls --format json --alsologtostderr:
[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"b9236aca51c0a1764d4a958a5c39e13c5d1cb7742d4950f179b973b6cb355cd6","repoDigests":["docker.io/library/e47bd7f9cecefa816dc9d6c9d1d3db600f89ede2ebe5fe58464bd84e90c22ebb-tmp@sha256:30719e80bf7e395b42a93ca240af0e9306049e833928118f59f77cc492352e67"],"repoTags":[],"size":"1466017"},{"id":"9c72f3672fef0b68ecf528a64905600f731dd3dc1c0848b505943787b52bbd93","repoDigests":["localhost/my-image@sha256:d851dfc190ab2199b3c983dcf27169ae3e872a
46ebd9126bbe67d4e8d039388d"],"repoTags":["localhost/my-image:functional-767593"],"size":"1468599"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"2be84dd575ee2ecdb186dc43a9cd951890a764d2cefbd31a72cdf4410c43a2d0","repoDigests":["docker.io/library/mysql@sha256:03b6dcedf5a2754da00e119e2cc6094ed3c884ad36b67bb25fe67be4b4f9bdb1","docker.io/library/mysql@sha256:bd873931ef20f30a5a9bf71498ce4e02c88cf48b2e8b782c337076d814deebde"],"repoTags":["docker.io
/library/mysql:5.7"],"size":"588268197"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-767593"],"size":"34114467"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":[
"gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","repoDigests":["registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83","registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"297083935"},{"id":"5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c","repoDigests":["registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f","registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699"],"repoTags":["registry.k8s.io/kube-proxy:v1.27.3"],"size":"72713623"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"b0b
1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da","repoDigests":["docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974","docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"65249302"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958
210e567079a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb","registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0"],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.3"],"size":"122065872"},{"id":"7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e","registry.k8s.io/kube-controller-manager@sha256:d3bdc20876edfaa4894cf8464dc98592385a43cbc033b37846dccc2460c7bc06"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.3"],"size":"113919286"},{"id":"41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082","registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8"],"repoTags":[
"registry.k8s.io/kube-scheduler:v1.27.3"],"size":"59811126"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"7626bc3417b16827ec4a77459b75ef1fa83860a2e9a64150246f8482d24c5912","repoDigests":["localhost/minikube-local-cache-test@sha256:2a2a8fdf61f88b1dcf81ea636f0cde8256ce03d1f85ab349977649b34842765d"],"repoTags":["localhost/minikube-local-cache-test:functiona
l-767593"],"size":"3345"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-767593 image ls --format json --alsologtostderr:
I0717 21:53:07.660540   30526 out.go:296] Setting OutFile to fd 1 ...
I0717 21:53:07.660700   30526 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:53:07.660711   30526 out.go:309] Setting ErrFile to fd 2...
I0717 21:53:07.660717   30526 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:53:07.660927   30526 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-15759/.minikube/bin
I0717 21:53:07.661473   30526 config.go:182] Loaded profile config "functional-767593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 21:53:07.661609   30526 config.go:182] Loaded profile config "functional-767593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 21:53:07.661974   30526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 21:53:07.662034   30526 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 21:53:07.676663   30526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46257
I0717 21:53:07.677180   30526 main.go:141] libmachine: () Calling .GetVersion
I0717 21:53:07.677847   30526 main.go:141] libmachine: Using API Version  1
I0717 21:53:07.677882   30526 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 21:53:07.678212   30526 main.go:141] libmachine: () Calling .GetMachineName
I0717 21:53:07.678443   30526 main.go:141] libmachine: (functional-767593) Calling .GetState
I0717 21:53:07.680099   30526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 21:53:07.680136   30526 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 21:53:07.694524   30526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41773
I0717 21:53:07.694975   30526 main.go:141] libmachine: () Calling .GetVersion
I0717 21:53:07.695529   30526 main.go:141] libmachine: Using API Version  1
I0717 21:53:07.695554   30526 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 21:53:07.695935   30526 main.go:141] libmachine: () Calling .GetMachineName
I0717 21:53:07.696161   30526 main.go:141] libmachine: (functional-767593) Calling .DriverName
I0717 21:53:07.697894   30526 ssh_runner.go:195] Run: systemctl --version
I0717 21:53:07.697941   30526 main.go:141] libmachine: (functional-767593) Calling .GetSSHHostname
I0717 21:53:07.700802   30526 main.go:141] libmachine: (functional-767593) DBG | domain functional-767593 has defined MAC address 52:54:00:27:4c:1d in network mk-functional-767593
I0717 21:53:07.701123   30526 main.go:141] libmachine: (functional-767593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:1d", ip: ""} in network mk-functional-767593: {Iface:virbr1 ExpiryTime:2023-07-17 22:50:00 +0000 UTC Type:0 Mac:52:54:00:27:4c:1d Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:functional-767593 Clientid:01:52:54:00:27:4c:1d}
I0717 21:53:07.701161   30526 main.go:141] libmachine: (functional-767593) DBG | domain functional-767593 has defined IP address 192.168.39.120 and MAC address 52:54:00:27:4c:1d in network mk-functional-767593
I0717 21:53:07.701297   30526 main.go:141] libmachine: (functional-767593) Calling .GetSSHPort
I0717 21:53:07.701474   30526 main.go:141] libmachine: (functional-767593) Calling .GetSSHKeyPath
I0717 21:53:07.701656   30526 main.go:141] libmachine: (functional-767593) Calling .GetSSHUsername
I0717 21:53:07.701819   30526 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/functional-767593/id_rsa Username:docker}
I0717 21:53:07.787757   30526 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 21:53:07.830750   30526 main.go:141] libmachine: Making call to close driver server
I0717 21:53:07.830761   30526 main.go:141] libmachine: (functional-767593) Calling .Close
I0717 21:53:07.831036   30526 main.go:141] libmachine: Successfully made call to close driver server
I0717 21:53:07.831054   30526 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 21:53:07.831069   30526 main.go:141] libmachine: Making call to close driver server
I0717 21:53:07.831077   30526 main.go:141] libmachine: (functional-767593) Calling .Close
I0717 21:53:07.831332   30526 main.go:141] libmachine: Successfully made call to close driver server
I0717 21:53:07.831354   30526 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 21:53:07.831338   30526 main.go:141] libmachine: (functional-767593) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
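
Since --format json emits a plain array of image objects (id, repoDigests, repoTags, size), it is the easiest form to post-process; a small illustrative filter, again assuming jq on the host:

# Print tagged images with their sizes, skipping digest-only entries.
out/minikube-linux-amd64 -p functional-767593 image ls --format json \
  | jq -r '.[] | select(.repoTags | length > 0) | "\(.repoTags[0])\t\(.size)"'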

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-767593 image ls --format yaml --alsologtostderr:
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb
- registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.3
size: "122065872"
- id: 7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e
- registry.k8s.io/kube-controller-manager@sha256:d3bdc20876edfaa4894cf8464dc98592385a43cbc033b37846dccc2460c7bc06
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.3
size: "113919286"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 2be84dd575ee2ecdb186dc43a9cd951890a764d2cefbd31a72cdf4410c43a2d0
repoDigests:
- docker.io/library/mysql@sha256:03b6dcedf5a2754da00e119e2cc6094ed3c884ad36b67bb25fe67be4b4f9bdb1
- docker.io/library/mysql@sha256:bd873931ef20f30a5a9bf71498ce4e02c88cf48b2e8b782c337076d814deebde
repoTags:
- docker.io/library/mysql:5.7
size: "588268197"
- id: 7626bc3417b16827ec4a77459b75ef1fa83860a2e9a64150246f8482d24c5912
repoDigests:
- localhost/minikube-local-cache-test@sha256:2a2a8fdf61f88b1dcf81ea636f0cde8256ce03d1f85ab349977649b34842765d
repoTags:
- localhost/minikube-local-cache-test:functional-767593
size: "3345"
- id: b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da
repoDigests:
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
- docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "65249302"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681
repoDigests:
- registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83
- registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "297083935"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-767593
size: "34114467"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c
repoDigests:
- registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f
- registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699
repoTags:
- registry.k8s.io/kube-proxy:v1.27.3
size: "72713623"
- id: 41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082
- registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.3
size: "59811126"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-767593 image ls --format yaml --alsologtostderr:
I0717 21:53:04.692020   30375 out.go:296] Setting OutFile to fd 1 ...
I0717 21:53:04.692156   30375 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:53:04.692168   30375 out.go:309] Setting ErrFile to fd 2...
I0717 21:53:04.692174   30375 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:53:04.692477   30375 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-15759/.minikube/bin
I0717 21:53:04.693259   30375 config.go:182] Loaded profile config "functional-767593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 21:53:04.693407   30375 config.go:182] Loaded profile config "functional-767593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 21:53:04.693920   30375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 21:53:04.693992   30375 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 21:53:04.708562   30375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37023
I0717 21:53:04.709047   30375 main.go:141] libmachine: () Calling .GetVersion
I0717 21:53:04.709662   30375 main.go:141] libmachine: Using API Version  1
I0717 21:53:04.709681   30375 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 21:53:04.710081   30375 main.go:141] libmachine: () Calling .GetMachineName
I0717 21:53:04.710261   30375 main.go:141] libmachine: (functional-767593) Calling .GetState
I0717 21:53:04.712343   30375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 21:53:04.712396   30375 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 21:53:04.726672   30375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42487
I0717 21:53:04.727096   30375 main.go:141] libmachine: () Calling .GetVersion
I0717 21:53:04.727595   30375 main.go:141] libmachine: Using API Version  1
I0717 21:53:04.727627   30375 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 21:53:04.727960   30375 main.go:141] libmachine: () Calling .GetMachineName
I0717 21:53:04.728174   30375 main.go:141] libmachine: (functional-767593) Calling .DriverName
I0717 21:53:04.728377   30375 ssh_runner.go:195] Run: systemctl --version
I0717 21:53:04.728409   30375 main.go:141] libmachine: (functional-767593) Calling .GetSSHHostname
I0717 21:53:04.730971   30375 main.go:141] libmachine: (functional-767593) DBG | domain functional-767593 has defined MAC address 52:54:00:27:4c:1d in network mk-functional-767593
I0717 21:53:04.731329   30375 main.go:141] libmachine: (functional-767593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:1d", ip: ""} in network mk-functional-767593: {Iface:virbr1 ExpiryTime:2023-07-17 22:50:00 +0000 UTC Type:0 Mac:52:54:00:27:4c:1d Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:functional-767593 Clientid:01:52:54:00:27:4c:1d}
I0717 21:53:04.731357   30375 main.go:141] libmachine: (functional-767593) DBG | domain functional-767593 has defined IP address 192.168.39.120 and MAC address 52:54:00:27:4c:1d in network mk-functional-767593
I0717 21:53:04.731495   30375 main.go:141] libmachine: (functional-767593) Calling .GetSSHPort
I0717 21:53:04.731645   30375 main.go:141] libmachine: (functional-767593) Calling .GetSSHKeyPath
I0717 21:53:04.731856   30375 main.go:141] libmachine: (functional-767593) Calling .GetSSHUsername
I0717 21:53:04.732002   30375 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/functional-767593/id_rsa Username:docker}
I0717 21:53:04.840502   30375 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 21:53:04.893410   30375 main.go:141] libmachine: Making call to close driver server
I0717 21:53:04.893427   30375 main.go:141] libmachine: (functional-767593) Calling .Close
I0717 21:53:04.893739   30375 main.go:141] libmachine: Successfully made call to close driver server
I0717 21:53:04.893774   30375 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 21:53:04.893801   30375 main.go:141] libmachine: Making call to close driver server
I0717 21:53:04.893779   30375 main.go:141] libmachine: (functional-767593) DBG | Closing plugin on server side
I0717 21:53:04.893820   30375 main.go:141] libmachine: (functional-767593) Calling .Close
I0717 21:53:04.894069   30375 main.go:141] libmachine: Successfully made call to close driver server
I0717 21:53:04.894094   30375 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 21:53:04.894175   30375 main.go:141] libmachine: (functional-767593) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)
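
The YAML listing above is what minikube renders from the guest's "sudo crictl images --output json" call shown in the stderr log. A minimal sketch of inspecting the same data from the host, assuming the functional-767593 profile is still running and the binary is at out/minikube-linux-amd64; the yq filtering step is an extra assumption (the test does no such post-processing):

# List images known to the cluster's container runtime in YAML form.
out/minikube-linux-amd64 -p functional-767593 image ls --format yaml

# Hypothetical follow-up: extract only the repo tags (assumes yq v4 is installed).
out/minikube-linux-amd64 -p functional-767593 image ls --format yaml | yq '.[].repoTags[]'

# Or read the raw data on the guest the same way minikube itself does.
out/minikube-linux-amd64 -p functional-767593 ssh "sudo crictl images --output json"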

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (2.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-767593 ssh pgrep buildkitd: exit status 1 (185.384257ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 image build -t localhost/my-image:functional-767593 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-767593 image build -t localhost/my-image:functional-767593 testdata/build --alsologtostderr: (2.309744991s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-767593 image build -t localhost/my-image:functional-767593 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> b9236aca51c
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-767593
--> 9c72f3672fe
Successfully tagged localhost/my-image:functional-767593
9c72f3672fef0b68ecf528a64905600f731dd3dc1c0848b505943787b52bbd93
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-767593 image build -t localhost/my-image:functional-767593 testdata/build --alsologtostderr:
I0717 21:53:05.122157   30429 out.go:296] Setting OutFile to fd 1 ...
I0717 21:53:05.122327   30429 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:53:05.122338   30429 out.go:309] Setting ErrFile to fd 2...
I0717 21:53:05.122345   30429 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:53:05.122572   30429 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-15759/.minikube/bin
I0717 21:53:05.123139   30429 config.go:182] Loaded profile config "functional-767593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 21:53:05.123693   30429 config.go:182] Loaded profile config "functional-767593": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 21:53:05.124050   30429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 21:53:05.124081   30429 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 21:53:05.139816   30429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40873
I0717 21:53:05.140216   30429 main.go:141] libmachine: () Calling .GetVersion
I0717 21:53:05.140770   30429 main.go:141] libmachine: Using API Version  1
I0717 21:53:05.140791   30429 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 21:53:05.141148   30429 main.go:141] libmachine: () Calling .GetMachineName
I0717 21:53:05.141348   30429 main.go:141] libmachine: (functional-767593) Calling .GetState
I0717 21:53:05.143340   30429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 21:53:05.143382   30429 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 21:53:05.157676   30429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44407
I0717 21:53:05.158108   30429 main.go:141] libmachine: () Calling .GetVersion
I0717 21:53:05.158543   30429 main.go:141] libmachine: Using API Version  1
I0717 21:53:05.158567   30429 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 21:53:05.158914   30429 main.go:141] libmachine: () Calling .GetMachineName
I0717 21:53:05.159070   30429 main.go:141] libmachine: (functional-767593) Calling .DriverName
I0717 21:53:05.159274   30429 ssh_runner.go:195] Run: systemctl --version
I0717 21:53:05.159309   30429 main.go:141] libmachine: (functional-767593) Calling .GetSSHHostname
I0717 21:53:05.161926   30429 main.go:141] libmachine: (functional-767593) DBG | domain functional-767593 has defined MAC address 52:54:00:27:4c:1d in network mk-functional-767593
I0717 21:53:05.162282   30429 main.go:141] libmachine: (functional-767593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4c:1d", ip: ""} in network mk-functional-767593: {Iface:virbr1 ExpiryTime:2023-07-17 22:50:00 +0000 UTC Type:0 Mac:52:54:00:27:4c:1d Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:functional-767593 Clientid:01:52:54:00:27:4c:1d}
I0717 21:53:05.162321   30429 main.go:141] libmachine: (functional-767593) DBG | domain functional-767593 has defined IP address 192.168.39.120 and MAC address 52:54:00:27:4c:1d in network mk-functional-767593
I0717 21:53:05.162503   30429 main.go:141] libmachine: (functional-767593) Calling .GetSSHPort
I0717 21:53:05.162668   30429 main.go:141] libmachine: (functional-767593) Calling .GetSSHKeyPath
I0717 21:53:05.162801   30429 main.go:141] libmachine: (functional-767593) Calling .GetSSHUsername
I0717 21:53:05.162943   30429 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/functional-767593/id_rsa Username:docker}
I0717 21:53:05.257866   30429 build_images.go:151] Building image from path: /tmp/build.2344826483.tar
I0717 21:53:05.257931   30429 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0717 21:53:05.303782   30429 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2344826483.tar
I0717 21:53:05.320994   30429 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2344826483.tar: stat -c "%s %y" /var/lib/minikube/build/build.2344826483.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2344826483.tar': No such file or directory
I0717 21:53:05.321049   30429 ssh_runner.go:362] scp /tmp/build.2344826483.tar --> /var/lib/minikube/build/build.2344826483.tar (3072 bytes)
I0717 21:53:05.358947   30429 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2344826483
I0717 21:53:05.375714   30429 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2344826483 -xf /var/lib/minikube/build/build.2344826483.tar
I0717 21:53:05.399328   30429 crio.go:297] Building image: /var/lib/minikube/build/build.2344826483
I0717 21:53:05.399381   30429 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-767593 /var/lib/minikube/build/build.2344826483 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0717 21:53:07.367221   30429 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-767593 /var/lib/minikube/build/build.2344826483 --cgroup-manager=cgroupfs: (1.96781107s)
I0717 21:53:07.367292   30429 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2344826483
I0717 21:53:07.378827   30429 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2344826483.tar
I0717 21:53:07.389431   30429 build_images.go:207] Built localhost/my-image:functional-767593 from /tmp/build.2344826483.tar
I0717 21:53:07.389466   30429 build_images.go:123] succeeded building to: functional-767593
I0717 21:53:07.389470   30429 build_images.go:124] failed building to: 
I0717 21:53:07.389494   30429 main.go:141] libmachine: Making call to close driver server
I0717 21:53:07.389508   30429 main.go:141] libmachine: (functional-767593) Calling .Close
I0717 21:53:07.389823   30429 main.go:141] libmachine: Successfully made call to close driver server
I0717 21:53:07.389846   30429 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 21:53:07.389855   30429 main.go:141] libmachine: Making call to close driver server
I0717 21:53:07.389864   30429 main.go:141] libmachine: (functional-767593) Calling .Close
I0717 21:53:07.390076   30429 main.go:141] libmachine: Successfully made call to close driver server
I0717 21:53:07.390092   30429 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 21:53:07.390112   30429 main.go:141] libmachine: (functional-767593) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.72s)
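
The build output above corresponds to a three-step Containerfile (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) built with podman inside the CRI-O VM. A sketch of reproducing a similar build by hand, assuming a running functional-767593 profile; the /tmp directory and the content.txt payload are illustrative, not the repository's actual testdata/build contents:

# Recreate a minimal build context resembling testdata/build (illustrative only).
mkdir -p /tmp/minikube-build-demo
echo "hello from the image build demo" > /tmp/minikube-build-demo/content.txt
cat > /tmp/minikube-build-demo/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF

# Build inside the cluster's runtime and confirm the tag shows up.
out/minikube-linux-amd64 -p functional-767593 image build -t localhost/my-image:functional-767593 /tmp/minikube-build-demo --alsologtostderr
out/minikube-linux-amd64 -p functional-767593 image ls | grep my-image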

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-767593
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.02s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)
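
All three update-context cases run the same command; the differences lie only in the pre-existing kubeconfig state. A sketch of the command plus a follow-up check, assuming kubectl is installed on the host and the functional-767593 profile exists; the kubectl verification is an assumption, not part of the test:

# Refresh the kubeconfig entry for the profile (apiserver IP/port).
out/minikube-linux-amd64 -p functional-767593 update-context --alsologtostderr -v=2

# Verify the refreshed context resolves.
kubectl config current-context
kubectl --context functional-767593 get nodes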

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 image load --daemon gcr.io/google-containers/addon-resizer:functional-767593 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-767593 image load --daemon gcr.io/google-containers/addon-resizer:functional-767593 --alsologtostderr: (5.347867269s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.58s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.58s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "297.044915ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "42.803284ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "273.665135ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "53.158153ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)
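
The profile listing cases above only assert on timing. A sketch of consuming the JSON output they exercise, assuming jq is installed; the .valid[].Name path is an assumption about the current JSON layout and may differ between minikube versions:

# Full listing versus the cheaper --light listing.
out/minikube-linux-amd64 profile list -o json
out/minikube-linux-amd64 profile list -o json --light

# Hypothetical filter: print just the profile names.
out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'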

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (24.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-767593 /tmp/TestFunctionalparallelMountCmdany-port1321129029/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1689630751835596881" to /tmp/TestFunctionalparallelMountCmdany-port1321129029/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1689630751835596881" to /tmp/TestFunctionalparallelMountCmdany-port1321129029/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1689630751835596881" to /tmp/TestFunctionalparallelMountCmdany-port1321129029/001/test-1689630751835596881
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-767593 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (255.036089ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 17 21:52 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 17 21:52 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 17 21:52 test-1689630751835596881
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 ssh cat /mount-9p/test-1689630751835596881
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-767593 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [2b220d4d-ae9d-4981-a883-5ba22b257765] Pending
helpers_test.go:344: "busybox-mount" [2b220d4d-ae9d-4981-a883-5ba22b257765] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [2b220d4d-ae9d-4981-a883-5ba22b257765] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [2b220d4d-ae9d-4981-a883-5ba22b257765] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 22.01830837s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-767593 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-767593 /tmp/TestFunctionalparallelMountCmdany-port1321129029/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (24.85s)
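
The any-port case starts a 9p mount daemon on the host, confirms the mount from inside the guest, and then lets a pod read and write files through it. A condensed sketch of the same flow, assuming a running functional-767593 profile; the host directory and file name are illustrative:

# Export a host directory into the guest over 9p (runs until killed).
mkdir -p /tmp/mount-demo && echo created-by-host > /tmp/mount-demo/hello.txt
out/minikube-linux-amd64 mount -p functional-767593 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1 &
MOUNT_PID=$!

# Check the mount from inside the guest, as the test does.
out/minikube-linux-amd64 -p functional-767593 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-767593 ssh "cat /mount-9p/hello.txt"

# Tear the mount back down.
kill "$MOUNT_PID"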

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 image load --daemon gcr.io/google-containers/addon-resizer:functional-767593 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-767593 image load --daemon gcr.io/google-containers/addon-resizer:functional-767593 --alsologtostderr: (2.509483749s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.84s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (14.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-767593
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 image load --daemon gcr.io/google-containers/addon-resizer:functional-767593 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-767593 image load --daemon gcr.io/google-containers/addon-resizer:functional-767593 --alsologtostderr: (12.82690875s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (14.05s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 service list -o json
functional_test.go:1493: Took "343.437394ms" to run "out/minikube-linux-amd64 -p functional-767593 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.39.120:30646
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.49s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.39.120:30646
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)
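
The ServiceCmd cases resolve the hello-node NodePort service to a host-reachable URL in several formats. A sketch of the same lookups plus a probe, assuming the hello-node deployment created earlier in the suite is still present and curl is installed on the host (the curl check is an assumption, not part of the test):

# Plain URL, HTTPS variant, and IP-only format.
URL=$(out/minikube-linux-amd64 -p functional-767593 service hello-node --url)
echo "$URL"
out/minikube-linux-amd64 -p functional-767593 service --namespace=default --https --url hello-node
out/minikube-linux-amd64 -p functional-767593 service hello-node --url --format={{.IP}}

# Hypothetical reachability check.
curl -s "$URL" > /dev/null && echo "hello-node reachable"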

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 image save gcr.io/google-containers/addon-resizer:functional-767593 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-767593 image save gcr.io/google-containers/addon-resizer:functional-767593 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (2.452074265s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.45s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 image rm gcr.io/google-containers/addon-resizer:functional-767593 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.06s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-767593 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (3.386010311s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.83s)
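
ImageSaveToFile, ImageRemove, and ImageLoadFromFile together round-trip an image through a tarball on the host. A sketch of that round trip, assuming the addon-resizer tag created by the Setup case is present in the cluster; the tarball path here is illustrative:

# Export the image, drop it from the cluster, then restore it from the tarball.
out/minikube-linux-amd64 -p functional-767593 image save gcr.io/google-containers/addon-resizer:functional-767593 /tmp/addon-resizer-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-767593 image rm gcr.io/google-containers/addon-resizer:functional-767593 --alsologtostderr
out/minikube-linux-amd64 -p functional-767593 image load /tmp/addon-resizer-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-767593 image ls | grep addon-resizer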

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-767593 /tmp/TestFunctionalparallelMountCmdspecific-port737292678/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-767593 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (309.703174ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-767593 /tmp/TestFunctionalparallelMountCmdspecific-port737292678/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-767593 /tmp/TestFunctionalparallelMountCmdspecific-port737292678/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.38s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-767593 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1267633165/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-767593 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1267633165/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-767593 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1267633165/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-767593 ssh "findmnt -T" /mount1: exit status 1 (362.377708ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-767593 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-767593 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1267633165/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-767593 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1267633165/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-767593 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1267633165/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.44s)
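
VerifyCleanup checks that a single "mount --kill" invocation tears down every mount daemon running for the profile. A sketch, assuming a running functional-767593 profile; the source directory is illustrative:

# Start two background mounts for the same profile.
mkdir -p /tmp/mount-cleanup-demo
out/minikube-linux-amd64 mount -p functional-767593 /tmp/mount-cleanup-demo:/mount1 --alsologtostderr -v=1 &
out/minikube-linux-amd64 mount -p functional-767593 /tmp/mount-cleanup-demo:/mount2 --alsologtostderr -v=1 &

# One kill sweeps up all mount processes for the profile.
out/minikube-linux-amd64 mount -p functional-767593 --kill=true

# Afterwards the guest mount points should be gone.
out/minikube-linux-amd64 -p functional-767593 ssh "findmnt -T /mount1" || echo "mount1 cleaned up"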

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-767593
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-767593 image save --daemon gcr.io/google-containers/addon-resizer:functional-767593 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-767593 image save --daemon gcr.io/google-containers/addon-resizer:functional-767593 --alsologtostderr: (3.334838336s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-767593
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.37s)
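
ImageSaveDaemon goes the opposite direction from ImageLoadDaemon: it copies an image out of the cluster's runtime into the host's Docker daemon. A sketch, assuming Docker is running on the host and the functional-767593 addon-resizer tag exists inside the cluster:

# Remove the host-side copy first so the save is observable.
docker rmi gcr.io/google-containers/addon-resizer:functional-767593 || true

# Pull the image out of the cluster into the local Docker daemon and inspect it.
out/minikube-linux-amd64 -p functional-767593 image save --daemon gcr.io/google-containers/addon-resizer:functional-767593 --alsologtostderr
docker image inspect gcr.io/google-containers/addon-resizer:functional-767593 --format '{{.Id}}'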

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-767593
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-767593
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-767593
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestIngressAddonLegacy/StartLegacyK8sCluster (108.26s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-480151 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0717 21:53:32.374279   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
E0717 21:53:52.855221   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
E0717 21:54:33.815924   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-480151 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m48.26134517s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (108.26s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.53s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-480151 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-480151 addons enable ingress --alsologtostderr -v=5: (13.527451848s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.53s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.58s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-480151 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.58s)
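
The two activation cases above only enable the addons on the legacy v1.18.20 profile. A sketch of enabling them and confirming the addon state, assuming the ingress-addon-legacy-480151 profile from the previous step is still running:

# Enable both addons, as the tests above do.
out/minikube-linux-amd64 -p ingress-addon-legacy-480151 addons enable ingress --alsologtostderr -v=5
out/minikube-linux-amd64 -p ingress-addon-legacy-480151 addons enable ingress-dns --alsologtostderr -v=5

# Confirm they now report as enabled.
out/minikube-linux-amd64 -p ingress-addon-legacy-480151 addons list | grep ingress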

                                                
                                    
x
+
TestJSONOutput/start/Command (99s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-292193 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0717 21:58:39.578165   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
E0717 21:58:50.024467   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/functional-767593/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-292193 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m39.004164671s)
--- PASS: TestJSONOutput/start/Command (99.00s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-292193 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.63s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.61s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-292193 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.08s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-292193 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-292193 --output=json --user=testUser: (7.084132938s)
--- PASS: TestJSONOutput/stop/Command (7.08s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.18s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-813063 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-813063 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (58.569486ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b8447a8f-b7dc-47ea-82f7-df2b6f091e11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-813063] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5944988b-1ccc-4b0a-9af7-5ef5414b802f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16899"}}
	{"specversion":"1.0","id":"57744dc6-7af8-459b-96cd-5590e6001196","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"efad05bb-1e46-4788-8a7f-07ad4754df08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16899-15759/kubeconfig"}}
	{"specversion":"1.0","id":"6f9ae2f3-fc56-478b-94dc-bf10a850dcd3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-15759/.minikube"}}
	{"specversion":"1.0","id":"df8454ab-2e10-47fe-94a0-a63d1762cc05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"26fc4992-ba4d-4545-a8fc-91d0c0efd31f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"91b7f26c-8bfc-4dfb-8b2b-4a9997535eb2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-813063" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-813063
--- PASS: TestErrorJSONOutput (0.18s)
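
Note: every line in the JSON output above is a self-contained CloudEvents-style record, so a failed start can be inspected with ordinary line-oriented tools. A minimal sketch (the profile name and the jq filter are illustrative, not part of the test):

    minikube start -p json-demo --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
    # prints: The driver 'fail' is not supported on linux/amd64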

                                                
                                    
TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (98.45s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-907319 --driver=kvm2  --container-runtime=crio
E0717 22:00:11.945087   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/functional-767593/client.crt: no such file or directory
E0717 22:00:31.749434   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: no such file or directory
E0717 22:00:31.754684   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: no such file or directory
E0717 22:00:31.764969   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: no such file or directory
E0717 22:00:31.785259   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: no such file or directory
E0717 22:00:31.825566   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: no such file or directory
E0717 22:00:31.906069   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: no such file or directory
E0717 22:00:32.066454   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: no such file or directory
E0717 22:00:32.387088   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: no such file or directory
E0717 22:00:33.027995   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: no such file or directory
E0717 22:00:34.308973   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: no such file or directory
E0717 22:00:36.870070   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: no such file or directory
E0717 22:00:41.990919   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: no such file or directory
E0717 22:00:52.231684   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-907319 --driver=kvm2  --container-runtime=crio: (44.85080124s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-910395 --driver=kvm2  --container-runtime=crio
E0717 22:01:12.712372   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-910395 --driver=kvm2  --container-runtime=crio: (51.105583693s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-907319
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-910395
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-910395" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-910395
helpers_test.go:175: Cleaning up "first-907319" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-907319
--- PASS: TestMinikubeProfile (98.45s)
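
Note: the profile round-trip exercised above can be reproduced by hand with a released minikube binary; a minimal sketch (profile names are illustrative, not taken from this run):

    # create two independent KVM/CRI-O clusters under separate profiles
    minikube start -p first --driver=kvm2 --container-runtime=crio
    minikube start -p second --driver=kvm2 --container-runtime=crio
    # switch the active profile, then list all profiles as JSON
    minikube profile first
    minikube profile list --output json
    # tear both down
    minikube delete -p second
    minikube delete -p first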

                                                
                                    
TestMountStart/serial/StartWithMountFirst (25.39s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-853079 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0717 22:01:53.673076   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-853079 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.387386206s)
--- PASS: TestMountStart/serial/StartWithMountFirst (25.39s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.38s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-853079 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-853079 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)
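
Note: the two tests above boil down to starting a Kubernetes-free guest with a 9p host mount and checking it over SSH; a rough equivalent (profile name and mount port are illustrative):

    minikube start -p mount-demo --memory=2048 --no-kubernetes \
      --mount --mount-port 46464 --mount-uid 0 --mount-gid 0 \
      --driver=kvm2 --container-runtime=crio
    # the host directory should be visible in the guest as a 9p filesystem
    minikube -p mount-demo ssh -- ls /minikube-host
    minikube -p mount-demo ssh -- "mount | grep 9p"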

                                                
                                    
TestMountStart/serial/StartWithMountSecond (27.29s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-876666 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0717 22:02:28.100857   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/functional-767593/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-876666 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.286164125s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.29s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.38s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-876666 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-876666 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.72s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-853079 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.72s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-876666 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-876666 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
TestMountStart/serial/Stop (1.08s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-876666
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-876666: (1.076780336s)
--- PASS: TestMountStart/serial/Stop (1.08s)

                                                
                                    
TestMountStart/serial/RestartStopped (21.39s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-876666
E0717 22:02:55.785646   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/functional-767593/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-876666: (20.391984714s)
--- PASS: TestMountStart/serial/RestartStopped (21.39s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-876666 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-876666 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (105.25s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-009530 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0717 22:03:11.892796   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
E0717 22:03:15.594176   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-009530 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m44.82035703s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (105.25s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.16s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-009530 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-009530 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-009530 -- rollout status deployment/busybox: (2.484204407s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-009530 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-009530 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-009530 -- exec busybox-67b7f59bb-58859 -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-009530 -- exec busybox-67b7f59bb-p72ln -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-009530 -- exec busybox-67b7f59bb-58859 -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-009530 -- exec busybox-67b7f59bb-p72ln -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-009530 -- exec busybox-67b7f59bb-58859 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-009530 -- exec busybox-67b7f59bb-p72ln -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.16s)
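
Note: once the deployment is rolled out, the DNS checks above only need kubectl; a minimal sketch (it assumes, as the test does, that the busybox replicas are the only pods in the default namespace):

    kubectl apply -f testdata/multinodes/multinode-pod-dns-test.yaml
    kubectl rollout status deployment/busybox
    for pod in $(kubectl get pods -o jsonpath='{.items[*].metadata.name}'); do
      kubectl exec "$pod" -- nslookup kubernetes.default
      kubectl exec "$pod" -- nslookup kubernetes.io
    done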

                                                
                                    
TestMultiNode/serial/AddNode (42.44s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-009530 -v 3 --alsologtostderr
E0717 22:05:31.747360   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-009530 -v 3 --alsologtostderr: (41.860391203s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.44s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.15s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 cp testdata/cp-test.txt multinode-009530:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 ssh -n multinode-009530 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 cp multinode-009530:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile170704396/001/cp-test_multinode-009530.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 ssh -n multinode-009530 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 cp multinode-009530:/home/docker/cp-test.txt multinode-009530-m02:/home/docker/cp-test_multinode-009530_multinode-009530-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 ssh -n multinode-009530 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 ssh -n multinode-009530-m02 "sudo cat /home/docker/cp-test_multinode-009530_multinode-009530-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 cp multinode-009530:/home/docker/cp-test.txt multinode-009530-m03:/home/docker/cp-test_multinode-009530_multinode-009530-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 ssh -n multinode-009530 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 ssh -n multinode-009530-m03 "sudo cat /home/docker/cp-test_multinode-009530_multinode-009530-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 cp testdata/cp-test.txt multinode-009530-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 ssh -n multinode-009530-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 cp multinode-009530-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile170704396/001/cp-test_multinode-009530-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 ssh -n multinode-009530-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 cp multinode-009530-m02:/home/docker/cp-test.txt multinode-009530:/home/docker/cp-test_multinode-009530-m02_multinode-009530.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 ssh -n multinode-009530-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 ssh -n multinode-009530 "sudo cat /home/docker/cp-test_multinode-009530-m02_multinode-009530.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 cp multinode-009530-m02:/home/docker/cp-test.txt multinode-009530-m03:/home/docker/cp-test_multinode-009530-m02_multinode-009530-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 ssh -n multinode-009530-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 ssh -n multinode-009530-m03 "sudo cat /home/docker/cp-test_multinode-009530-m02_multinode-009530-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 cp testdata/cp-test.txt multinode-009530-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 ssh -n multinode-009530-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 cp multinode-009530-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile170704396/001/cp-test_multinode-009530-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 ssh -n multinode-009530-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 cp multinode-009530-m03:/home/docker/cp-test.txt multinode-009530:/home/docker/cp-test_multinode-009530-m03_multinode-009530.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 ssh -n multinode-009530-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 ssh -n multinode-009530 "sudo cat /home/docker/cp-test_multinode-009530-m03_multinode-009530.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 cp multinode-009530-m03:/home/docker/cp-test.txt multinode-009530-m02:/home/docker/cp-test_multinode-009530-m03_multinode-009530-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 ssh -n multinode-009530-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 ssh -n multinode-009530-m02 "sudo cat /home/docker/cp-test_multinode-009530-m03_multinode-009530-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.15s)
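
Note: the copy matrix above reduces to three shapes of `minikube cp` plus an SSH read-back; a minimal sketch (profile and node names are illustrative):

    minikube -p demo cp testdata/cp-test.txt demo:/home/docker/cp-test.txt              # host -> node
    minikube -p demo cp demo:/home/docker/cp-test.txt /tmp/cp-test.txt                  # node -> host
    minikube -p demo cp demo:/home/docker/cp-test.txt demo-m02:/home/docker/cp-test.txt # node -> node
    minikube -p demo ssh -n demo-m02 "sudo cat /home/docker/cp-test.txt"                # verify on the target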

                                                
                                    
TestMultiNode/serial/StopNode (2.93s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-009530 node stop m03: (2.07719586s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-009530 status: exit status 7 (423.879148ms)

                                                
                                                
-- stdout --
	multinode-009530
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-009530-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-009530-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-009530 status --alsologtostderr: exit status 7 (424.167372ms)

                                                
                                                
-- stdout --
	multinode-009530
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-009530-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-009530-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 22:05:52.666853   37233 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:05:52.666980   37233 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:05:52.666991   37233 out.go:309] Setting ErrFile to fd 2...
	I0717 22:05:52.666998   37233 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:05:52.667210   37233 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-15759/.minikube/bin
	I0717 22:05:52.667398   37233 out.go:303] Setting JSON to false
	I0717 22:05:52.667427   37233 mustload.go:65] Loading cluster: multinode-009530
	I0717 22:05:52.667534   37233 notify.go:220] Checking for updates...
	I0717 22:05:52.668150   37233 config.go:182] Loaded profile config "multinode-009530": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:05:52.668260   37233 status.go:255] checking status of multinode-009530 ...
	I0717 22:05:52.669215   37233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:05:52.669431   37233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:05:52.685027   37233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33647
	I0717 22:05:52.685445   37233 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:05:52.686062   37233 main.go:141] libmachine: Using API Version  1
	I0717 22:05:52.686101   37233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:05:52.686483   37233 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:05:52.686703   37233 main.go:141] libmachine: (multinode-009530) Calling .GetState
	I0717 22:05:52.688449   37233 status.go:330] multinode-009530 host status = "Running" (err=<nil>)
	I0717 22:05:52.688475   37233 host.go:66] Checking if "multinode-009530" exists ...
	I0717 22:05:52.688846   37233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:05:52.688888   37233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:05:52.704841   37233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35819
	I0717 22:05:52.705281   37233 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:05:52.705776   37233 main.go:141] libmachine: Using API Version  1
	I0717 22:05:52.705797   37233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:05:52.706129   37233 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:05:52.706319   37233 main.go:141] libmachine: (multinode-009530) Calling .GetIP
	I0717 22:05:52.709127   37233 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:05:52.709561   37233 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:05:52.709596   37233 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:05:52.709717   37233 host.go:66] Checking if "multinode-009530" exists ...
	I0717 22:05:52.710100   37233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:05:52.710134   37233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:05:52.724582   37233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34303
	I0717 22:05:52.724943   37233 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:05:52.725364   37233 main.go:141] libmachine: Using API Version  1
	I0717 22:05:52.725385   37233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:05:52.725694   37233 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:05:52.725867   37233 main.go:141] libmachine: (multinode-009530) Calling .DriverName
	I0717 22:05:52.726033   37233 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 22:05:52.726056   37233 main.go:141] libmachine: (multinode-009530) Calling .GetSSHHostname
	I0717 22:05:52.728841   37233 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:05:52.729166   37233 main.go:141] libmachine: (multinode-009530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:61:2c", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:03:22 +0000 UTC Type:0 Mac:52:54:00:64:61:2c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-009530 Clientid:01:52:54:00:64:61:2c}
	I0717 22:05:52.729193   37233 main.go:141] libmachine: (multinode-009530) DBG | domain multinode-009530 has defined IP address 192.168.39.222 and MAC address 52:54:00:64:61:2c in network mk-multinode-009530
	I0717 22:05:52.729302   37233 main.go:141] libmachine: (multinode-009530) Calling .GetSSHPort
	I0717 22:05:52.729460   37233 main.go:141] libmachine: (multinode-009530) Calling .GetSSHKeyPath
	I0717 22:05:52.729614   37233 main.go:141] libmachine: (multinode-009530) Calling .GetSSHUsername
	I0717 22:05:52.729756   37233 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530/id_rsa Username:docker}
	I0717 22:05:52.819174   37233 ssh_runner.go:195] Run: systemctl --version
	I0717 22:05:52.825017   37233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:05:52.837595   37233 kubeconfig.go:92] found "multinode-009530" server: "https://192.168.39.222:8443"
	I0717 22:05:52.837620   37233 api_server.go:166] Checking apiserver status ...
	I0717 22:05:52.837663   37233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:05:52.849334   37233 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1068/cgroup
	I0717 22:05:52.857897   37233 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod49e7615bd1aa66d6e32161e120c48180/crio-2a725fb53f4f6f9bc5225db4e85a2e8b5e77715c19d0aa83420f986b7e4279c5"
	I0717 22:05:52.857978   37233 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod49e7615bd1aa66d6e32161e120c48180/crio-2a725fb53f4f6f9bc5225db4e85a2e8b5e77715c19d0aa83420f986b7e4279c5/freezer.state
	I0717 22:05:52.866907   37233 api_server.go:204] freezer state: "THAWED"
	I0717 22:05:52.866934   37233 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8443/healthz ...
	I0717 22:05:52.871754   37233 api_server.go:279] https://192.168.39.222:8443/healthz returned 200:
	ok
	I0717 22:05:52.871779   37233 status.go:421] multinode-009530 apiserver status = Running (err=<nil>)
	I0717 22:05:52.871798   37233 status.go:257] multinode-009530 status: &{Name:multinode-009530 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 22:05:52.871829   37233 status.go:255] checking status of multinode-009530-m02 ...
	I0717 22:05:52.872129   37233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:05:52.872178   37233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:05:52.886865   37233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36017
	I0717 22:05:52.887293   37233 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:05:52.887744   37233 main.go:141] libmachine: Using API Version  1
	I0717 22:05:52.887763   37233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:05:52.888063   37233 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:05:52.888255   37233 main.go:141] libmachine: (multinode-009530-m02) Calling .GetState
	I0717 22:05:52.889816   37233 status.go:330] multinode-009530-m02 host status = "Running" (err=<nil>)
	I0717 22:05:52.889838   37233 host.go:66] Checking if "multinode-009530-m02" exists ...
	I0717 22:05:52.890257   37233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:05:52.890300   37233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:05:52.904965   37233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33855
	I0717 22:05:52.905392   37233 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:05:52.905909   37233 main.go:141] libmachine: Using API Version  1
	I0717 22:05:52.905925   37233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:05:52.906229   37233 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:05:52.906459   37233 main.go:141] libmachine: (multinode-009530-m02) Calling .GetIP
	I0717 22:05:52.909433   37233 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:05:52.909945   37233 main.go:141] libmachine: (multinode-009530-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:62", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:04:29 +0000 UTC Type:0 Mac:52:54:00:2a:ac:62 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-009530-m02 Clientid:01:52:54:00:2a:ac:62}
	I0717 22:05:52.909972   37233 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:05:52.910156   37233 host.go:66] Checking if "multinode-009530-m02" exists ...
	I0717 22:05:52.910446   37233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:05:52.910482   37233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:05:52.925241   37233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42469
	I0717 22:05:52.925698   37233 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:05:52.926195   37233 main.go:141] libmachine: Using API Version  1
	I0717 22:05:52.926214   37233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:05:52.926535   37233 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:05:52.926700   37233 main.go:141] libmachine: (multinode-009530-m02) Calling .DriverName
	I0717 22:05:52.926920   37233 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 22:05:52.926942   37233 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHHostname
	I0717 22:05:52.929656   37233 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:05:52.930095   37233 main.go:141] libmachine: (multinode-009530-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:ac:62", ip: ""} in network mk-multinode-009530: {Iface:virbr1 ExpiryTime:2023-07-17 23:04:29 +0000 UTC Type:0 Mac:52:54:00:2a:ac:62 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-009530-m02 Clientid:01:52:54:00:2a:ac:62}
	I0717 22:05:52.930126   37233 main.go:141] libmachine: (multinode-009530-m02) DBG | domain multinode-009530-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:2a:ac:62 in network mk-multinode-009530
	I0717 22:05:52.930300   37233 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHPort
	I0717 22:05:52.930565   37233 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHKeyPath
	I0717 22:05:52.930712   37233 main.go:141] libmachine: (multinode-009530-m02) Calling .GetSSHUsername
	I0717 22:05:52.930853   37233 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16899-15759/.minikube/machines/multinode-009530-m02/id_rsa Username:docker}
	I0717 22:05:53.017948   37233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:05:53.031735   37233 status.go:257] multinode-009530-m02 status: &{Name:multinode-009530-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0717 22:05:53.031778   37233 status.go:255] checking status of multinode-009530-m03 ...
	I0717 22:05:53.032216   37233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 22:05:53.032262   37233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 22:05:53.047596   37233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46609
	I0717 22:05:53.047963   37233 main.go:141] libmachine: () Calling .GetVersion
	I0717 22:05:53.048487   37233 main.go:141] libmachine: Using API Version  1
	I0717 22:05:53.048508   37233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 22:05:53.048855   37233 main.go:141] libmachine: () Calling .GetMachineName
	I0717 22:05:53.049065   37233 main.go:141] libmachine: (multinode-009530-m03) Calling .GetState
	I0717 22:05:53.050577   37233 status.go:330] multinode-009530-m03 host status = "Stopped" (err=<nil>)
	I0717 22:05:53.050592   37233 status.go:343] host is not running, skipping remaining checks
	I0717 22:05:53.050599   37233 status.go:257] multinode-009530-m03 status: &{Name:multinode-009530-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.93s)
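
Note: in this run `minikube status` exits 7 once a node in the profile is stopped, while the remaining nodes still report Running; a minimal sketch (profile name is illustrative):

    minikube -p demo node stop m03
    minikube -p demo status --alsologtostderr
    echo $?    # 7 here while m03 was stopped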

                                                
                                    
TestMultiNode/serial/StartAfterStop (27.9s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 node start m03 --alsologtostderr
E0717 22:05:59.435174   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: no such file or directory
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-009530 node start m03 --alsologtostderr: (27.270177068s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (27.90s)

                                                
                                    
TestMultiNode/serial/DeleteNode (1.57s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-009530 node delete m03: (1.053739246s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.57s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (443.98s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-009530 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0717 22:20:31.747302   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: no such file or directory
E0717 22:22:28.101653   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/functional-767593/client.crt: no such file or directory
E0717 22:23:11.892298   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
E0717 22:25:31.749646   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: no such file or directory
E0717 22:26:14.940829   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
E0717 22:27:28.101363   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/functional-767593/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-009530 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m23.449299673s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-009530 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (443.98s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (47.75s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-009530
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-009530-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-009530-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (59.403066ms)

                                                
                                                
-- stdout --
	* [multinode-009530-m02] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-15759/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-15759/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-009530-m02' is duplicated with machine name 'multinode-009530-m02' in profile 'multinode-009530'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-009530-m03 --driver=kvm2  --container-runtime=crio
E0717 22:28:11.892862   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-009530-m03 --driver=kvm2  --container-runtime=crio: (46.552879674s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-009530
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-009530: exit status 80 (208.43089ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-009530
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-009530-m03 already exists in multinode-009530-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-009530-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.75s)

                                                
                                    
TestScheduledStopUnix (118.82s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-387928 --memory=2048 --driver=kvm2  --container-runtime=crio
E0717 22:33:11.892751   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
E0717 22:33:34.796642   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-387928 --memory=2048 --driver=kvm2  --container-runtime=crio: (47.282978016s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-387928 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-387928 -n scheduled-stop-387928
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-387928 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-387928 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-387928 -n scheduled-stop-387928
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-387928
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-387928 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-387928
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-387928: exit status 7 (57.017172ms)

                                                
                                                
-- stdout --
	scheduled-stop-387928
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-387928 -n scheduled-stop-387928
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-387928 -n scheduled-stop-387928: exit status 7 (58.622726ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-387928" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-387928
--- PASS: TestScheduledStopUnix (118.82s)
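
Note: the scheduled-stop flow above is driven entirely by flags on `minikube stop`; a rough by-hand equivalent (profile name is illustrative):

    minikube stop -p demo --schedule 5m                   # arm a stop five minutes out
    minikube status -p demo --format '{{.TimeToStop}}'    # inspect the pending timer
    minikube stop -p demo --cancel-scheduled              # disarm it
    minikube stop -p demo --schedule 15s                  # re-arm and let it fire
    minikube status -p demo --format '{{.Host}}'          # "Stopped" once it has fired (exit 7)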

                                                
                                    
TestKubernetesUpgrade (137.79s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-719218 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-719218 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m11.945875033s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-719218
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-719218: (3.136231132s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-719218 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-719218 status --format={{.Host}}: exit status 7 (82.560884ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-719218 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-719218 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (43.273640572s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-719218 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-719218 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-719218 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (103.520213ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-719218] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-15759/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-15759/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.27.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-719218
	    minikube start -p kubernetes-upgrade-719218 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7192182 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.27.3, by running:
	    
	    minikube start -p kubernetes-upgrade-719218 --kubernetes-version=v1.27.3
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-719218 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-719218 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (18.07643378s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-719218" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-719218
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-719218: (1.094622927s)
--- PASS: TestKubernetesUpgrade (137.79s)
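
Note: the upgrade path above is start-old, stop, start-new; an in-place downgrade is refused with exit 106, as shown. A minimal sketch (profile name is illustrative):

    minikube start -p upgrade-demo --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio
    minikube stop -p upgrade-demo
    minikube start -p upgrade-demo --kubernetes-version=v1.27.3 --driver=kvm2 --container-runtime=crio
    kubectl --context upgrade-demo version --output=json
    # re-running start with --kubernetes-version=v1.16.0 now fails with K8S_DOWNGRADE_UNSUPPORTED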

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.34s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.34s)

                                                
                                    
TestNetworkPlugins/group/false (3s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-987609 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-987609 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (129.476331ms)

                                                
                                                
-- stdout --
	* [false-987609] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-15759/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-15759/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 22:35:01.020223   45629 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:35:01.020388   45629 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:35:01.020397   45629 out.go:309] Setting ErrFile to fd 2...
	I0717 22:35:01.020403   45629 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:35:01.020832   45629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-15759/.minikube/bin
	I0717 22:35:01.021800   45629 out.go:303] Setting JSON to false
	I0717 22:35:01.022873   45629 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8253,"bootTime":1689625048,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 22:35:01.022964   45629 start.go:138] virtualization: kvm guest
	I0717 22:35:01.025674   45629 out.go:177] * [false-987609] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 22:35:01.027775   45629 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 22:35:01.027713   45629 notify.go:220] Checking for updates...
	I0717 22:35:01.029408   45629 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 22:35:01.031093   45629 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-15759/kubeconfig
	I0717 22:35:01.032841   45629 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-15759/.minikube
	I0717 22:35:01.034314   45629 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 22:35:01.035767   45629 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 22:35:01.037902   45629 config.go:182] Loaded profile config "kubernetes-upgrade-719218": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0717 22:35:01.038109   45629 config.go:182] Loaded profile config "offline-crio-696820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:35:01.038223   45629 config.go:182] Loaded profile config "stopped-upgrade-132802": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0717 22:35:01.038422   45629 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 22:35:01.090066   45629 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 22:35:01.091630   45629 start.go:298] selected driver: kvm2
	I0717 22:35:01.091648   45629 start.go:880] validating driver "kvm2" against <nil>
	I0717 22:35:01.091661   45629 start.go:891] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 22:35:01.094208   45629 out.go:177] 
	W0717 22:35:01.095858   45629 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0717 22:35:01.097368   45629 out.go:177] 

                                                
                                                
** /stderr **
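The MK_USAGE exit above is what this test expects: with the crio runtime, minikube refuses to start when CNI is explicitly disabled (--cni=false). For comparison, a hedged sketch of invocations that would pass this validation, reusing the run's profile name, memory and driver flags for illustration; --cni=bridge is only one of the plugin values minikube accepts:

    # let minikube auto-select a CNI (the default)
    minikube start -p false-987609 --memory=2048 --driver=kvm2 --container-runtime=crio

    # or name a plugin explicitly
    minikube start -p false-987609 --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio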
net_test.go:88: 
----------------------- debugLogs start: false-987609 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-987609

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-987609

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-987609

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-987609

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-987609

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-987609

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-987609

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-987609

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-987609

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-987609

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987609"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987609"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987609"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-987609

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987609"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987609"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-987609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-987609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-987609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-987609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-987609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-987609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-987609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-987609" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987609"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987609"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987609"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987609"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987609"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-987609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-987609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-987609" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987609"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987609"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987609"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987609"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987609"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-987609

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987609"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987609"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987609"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987609"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987609"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987609"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987609"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987609"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987609"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987609"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987609"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987609"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987609"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987609"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987609"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987609"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987609"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-987609"

                                                
                                                
----------------------- debugLogs end: false-987609 [took: 2.732769004s] --------------------------------
helpers_test.go:175: Cleaning up "false-987609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-987609
--- PASS: TestNetworkPlugins/group/false (3.00s)

                                                
                                    
x
+
TestPause/serial/Start (88s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-482945 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-482945 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m28.002351855s)
--- PASS: TestPause/serial/Start (88.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-431736 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-431736 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (76.380116ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-431736] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-15759/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-15759/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
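As the MK_USAGE message above states, --kubernetes-version cannot be combined with --no-kubernetes. A sketch of the valid alternatives, reusing this run's profile name; the v1.27.3 version below is only an example:

    # run the VM without any Kubernetes components (what the later StartWithStopK8s/Start subtests do)
    minikube start -p NoKubernetes-431736 --no-kubernetes --driver=kvm2 --container-runtime=crio

    # or pin a version and drop --no-kubernetes
    minikube start -p NoKubernetes-431736 --kubernetes-version=v1.27.3 --driver=kvm2 --container-runtime=crio

    # if a version is set in the global config, clear it first (per the error's own suggestion)
    minikube config unset kubernetes-version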

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (89.45s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-431736 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-431736 --driver=kvm2  --container-runtime=crio: (1m29.183259464s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-431736 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (89.45s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.4s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-132802
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.40s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (165.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-332820 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
E0717 22:40:31.749495   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-332820 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (2m45.423921202s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (165.42s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (14.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-431736 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-431736 --no-kubernetes --driver=kvm2  --container-runtime=crio: (12.979626099s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-431736 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-431736 status -o json: exit status 2 (323.668308ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-431736","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-431736
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (14.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (25.94s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-431736 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-431736 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.942452566s)
--- PASS: TestNoKubernetes/serial/Start (25.94s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-431736 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-431736 "sudo systemctl is-active --quiet service kubelet": exit status 1 (187.853097ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (29.65s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (14.39327059s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (15.260622489s)
--- PASS: TestNoKubernetes/serial/ProfileList (29.65s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-431736
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-431736: (1.093058533s)
--- PASS: TestNoKubernetes/serial/Stop (1.09s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (21.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-431736 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-431736 --driver=kvm2  --container-runtime=crio: (21.316731796s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (21.32s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (119.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-571296 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-571296 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3: (1m59.380386085s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (119.38s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (156.75s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-935524 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-935524 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3: (2m36.747841728s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (156.75s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-431736 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-431736 "sudo systemctl is-active --quiet service kubelet": exit status 1 (202.862734ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (150.6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-504828 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3
E0717 22:42:28.101707   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/functional-767593/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-504828 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3: (2m30.602674428s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (150.60s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-332820 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [adb7850f-dfc0-4873-ab0c-f2e5fe6bdc56] Pending
helpers_test.go:344: "busybox" [adb7850f-dfc0-4873-ab0c-f2e5fe6bdc56] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [adb7850f-dfc0-4873-ab0c-f2e5fe6bdc56] Running
E0717 22:42:54.941214   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.055403805s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-332820 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.50s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-332820 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-332820 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.936136153s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-332820 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.03s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.53s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-571296 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [34f91931-6103-40a5-b7e2-9f17c68e9871] Pending
helpers_test.go:344: "busybox" [34f91931-6103-40a5-b7e2-9f17c68e9871] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [34f91931-6103-40a5-b7e2-9f17c68e9871] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.028015776s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-571296 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.53s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-571296 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-571296 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.187957282s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-571296 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.29s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.5s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-935524 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [dcf23863-eb23-4dfc-91c8-866a27d56aa7] Pending
helpers_test.go:344: "busybox" [dcf23863-eb23-4dfc-91c8-866a27d56aa7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [dcf23863-eb23-4dfc-91c8-866a27d56aa7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.026701203s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-935524 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.50s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-504828 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c717a981-7cdf-49aa-8028-699bd7bc25f0] Pending
helpers_test.go:344: "busybox" [c717a981-7cdf-49aa-8028-699bd7bc25f0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c717a981-7cdf-49aa-8028-699bd7bc25f0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.02028266s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-504828 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.46s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-935524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-935524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.11867996s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-935524 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.21s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-504828 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-504828 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.190047192s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-504828 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.26s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (807.67s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-332820 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-332820 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (13m27.379683308s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-332820 -n old-k8s-version-332820
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (807.67s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (813.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-571296 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-571296 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3: (13m32.777356076s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-571296 -n embed-certs-571296
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (813.05s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (510.49s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-935524 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-935524 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3: (8m30.227896206s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-935524 -n no-preload-935524
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (510.49s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (834s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-504828 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3
E0717 22:48:11.892290   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
E0717 22:50:14.797766   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: no such file or directory
E0717 22:50:31.747252   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: no such file or directory
E0717 22:52:28.101398   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/functional-767593/client.crt: no such file or directory
E0717 22:53:11.892969   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
E0717 22:55:31.747154   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/ingress-addon-legacy-480151/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-504828 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3: (13m53.728170369s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-504828 -n default-k8s-diff-port-504828
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (834.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (62.06s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-670356 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-670356 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3: (1m2.055967815s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (62.06s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.74s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-670356 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-670356 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.741114159s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.74s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-670356 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-670356 --alsologtostderr -v=3: (10.27609063s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.28s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-670356 -n newest-cni-670356
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-670356 -n newest-cni-670356: exit status 7 (59.399351ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-670356 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (50.42s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-670356 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-670356 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.27.3: (50.050852993s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-670356 -n newest-cni-670356
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (50.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (101.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-987609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-987609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m41.033722859s)
--- PASS: TestNetworkPlugins/group/auto/Start (101.03s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-670356 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.81s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-670356 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-670356 -n newest-cni-670356
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-670356 -n newest-cni-670356: exit status 2 (265.78747ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-670356 -n newest-cni-670356
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-670356 -n newest-cni-670356: exit status 2 (273.643806ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-670356 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-670356 -n newest-cni-670356
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-670356 -n newest-cni-670356
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.81s)
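Note: both non-zero exits above are expected while the profile is paused; the apiserver reports Paused and the kubelet reports Stopped until the cluster is unpaused. A minimal by-hand check of the same sequence, assuming the same profile name as in the run above:

	out/minikube-linux-amd64 pause -p newest-cni-670356
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-670356   # Paused, exit status 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-670356     # Stopped, exit status 2
	out/minikube-linux-amd64 unpause -p newest-cni-670356
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-670356   # Running, exit status 0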

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (90.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-987609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-987609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m30.680240606s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (90.68s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (144.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-987609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-987609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (2m24.482428325s)
--- PASS: TestNetworkPlugins/group/calico/Start (144.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (149.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-987609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0717 23:12:28.101193   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/functional-767593/client.crt: no such file or directory
E0717 23:12:49.553214   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/client.crt: no such file or directory
E0717 23:12:49.558513   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/client.crt: no such file or directory
E0717 23:12:49.568805   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/client.crt: no such file or directory
E0717 23:12:49.589129   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/client.crt: no such file or directory
E0717 23:12:49.629438   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/client.crt: no such file or directory
E0717 23:12:49.709833   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/client.crt: no such file or directory
E0717 23:12:49.870336   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/client.crt: no such file or directory
E0717 23:12:50.190975   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/client.crt: no such file or directory
E0717 23:12:50.832120   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/client.crt: no such file or directory
E0717 23:12:52.112462   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/client.crt: no such file or directory
E0717 23:12:54.673612   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/client.crt: no such file or directory
E0717 23:12:59.793848   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/client.crt: no such file or directory
E0717 23:13:10.034696   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/client.crt: no such file or directory
E0717 23:13:11.893160   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
E0717 23:13:30.514908   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/old-k8s-version-332820/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-987609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (2m29.555251712s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (149.56s)
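Unlike the other plugin groups in this section, this variant hands minikube a CNI manifest path instead of a built-in plugin name; the --cni flag accepts either. Stripped of the test-only wait flags, a by-hand equivalent of the run above (a sketch using the same profile name and manifest path):

	out/minikube-linux-amd64 start -p custom-flannel-987609 --memory=3072 --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=crio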

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-tv8sr" [078ebb14-246c-4551-95f2-cb855914a999] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.022686989s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
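The readiness wait the harness performs here can be reproduced by hand against the label and namespace shown above (a sketch, not part of the test itself):

	kubectl --context kindnet-987609 -n kube-system get pods -l app=kindnet -o wide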

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-987609 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-987609 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-vp95d" [7277dcdc-f25c-44a2-962b-64c0ffaf4092] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-vp95d" [7277dcdc-f25c-44a2-962b-64c0ffaf4092] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.010430242s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-987609 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (13.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-987609 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-rc88v" [0ae4605e-fdeb-48bb-a87e-8b57ac53fdbf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-rc88v" [0ae4605e-fdeb-48bb-a87e-8b57ac53fdbf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.009037243s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-987609 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-987609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-987609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.20s)
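Taken together, the DNS, Localhost and HairPin checks exercise in-cluster name resolution, pod-local connectivity, and hairpin traffic (the pod reaching itself through its own service). In the nc invocations, -z probes without sending data, -w 5 sets a five-second timeout, and -i 5 spaces the probes five seconds apart. A condensed by-hand equivalent against the same context (a sketch):

	kubectl --context kindnet-987609 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context kindnet-987609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context kindnet-987609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

All three exiting 0 indicates the plugin handles DNS, local, and hairpin traffic.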

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-987609 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-987609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-987609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (101.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-987609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-987609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m41.514739205s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (101.51s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (108.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-987609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-987609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m48.394894755s)
--- PASS: TestNetworkPlugins/group/flannel/Start (108.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-g5qcc" [4e19f7e2-8806-4b42-ad8e-afa5818cd2e1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.023393663s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-987609 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (14.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-987609 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-6rjtr" [b03721ca-28ac-4e67-892b-8c3d70949e33] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0717 23:14:50.266068   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/client.crt: no such file or directory
E0717 23:14:50.271360   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/client.crt: no such file or directory
E0717 23:14:50.282180   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/client.crt: no such file or directory
E0717 23:14:50.302759   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/client.crt: no such file or directory
E0717 23:14:50.343288   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/client.crt: no such file or directory
E0717 23:14:50.423633   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/client.crt: no such file or directory
E0717 23:14:50.584681   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/client.crt: no such file or directory
E0717 23:14:50.905035   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/client.crt: no such file or directory
E0717 23:14:51.545594   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/client.crt: no such file or directory
E0717 23:14:52.826254   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/client.crt: no such file or directory
E0717 23:14:55.386672   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/client.crt: no such file or directory
E0717 23:14:55.460970   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/client.crt: no such file or directory
E0717 23:14:55.466239   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/client.crt: no such file or directory
E0717 23:14:55.476600   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/client.crt: no such file or directory
E0717 23:14:55.497427   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/client.crt: no such file or directory
E0717 23:14:55.537754   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/client.crt: no such file or directory
E0717 23:14:55.618293   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/client.crt: no such file or directory
E0717 23:14:55.778559   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-6rjtr" [b03721ca-28ac-4e67-892b-8c3d70949e33] Running
E0717 23:14:56.098668   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/client.crt: no such file or directory
E0717 23:14:56.739610   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.008530354s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-987609 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-987609 replace --force -f testdata/netcat-deployment.yaml
E0717 23:14:58.020747   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-h2pmg" [190dbfbb-eaf2-4637-896d-1c0b7ddac922] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0717 23:15:00.507777   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/client.crt: no such file or directory
E0717 23:15:00.581545   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-h2pmg" [190dbfbb-eaf2-4637-896d-1c0b7ddac922] Running
E0717 23:15:05.702494   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.010667842s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.94s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-987609 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-987609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-987609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-987609 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-987609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0717 23:15:10.748688   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-987609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (102.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-987609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-987609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m42.955359334s)
--- PASS: TestNetworkPlugins/group/bridge/Start (102.96s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-987609 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-987609 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-stfcf" [6d620627-d9c2-41ad-b2dc-e548863bf614] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-stfcf" [6d620627-d9c2-41ad-b2dc-e548863bf614] Running
E0717 23:16:12.189686   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/no-preload-935524/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.008191156s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-9wtd9" [883bd120-4b11-4134-af94-9524066ed962] Running
E0717 23:16:14.942710   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/addons-436248/client.crt: no such file or directory
E0717 23:16:17.383765   22990 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-15759/.minikube/profiles/default-k8s-diff-port-504828/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.021092248s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)
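Note that, unlike the kindnet and calico groups, the flannel controller pods run in their own kube-flannel namespace rather than kube-system; a by-hand equivalent of the wait above (a sketch):

	kubectl --context flannel-987609 -n kube-flannel get pods -l app=flannel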

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-987609 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-987609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-987609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-987609 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (12.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-987609 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-q7v66" [15d7c98f-6320-4696-ba44-9a370d929103] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-q7v66" [15d7c98f-6320-4696-ba44-9a370d929103] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.044281746s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-987609 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-987609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-987609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-987609 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-987609 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-75cgq" [6c610528-daa0-4ad4-818c-c3490d91b8b7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-75cgq" [6c610528-daa0-4ad4-818c-c3490d91b8b7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.007169079s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-987609 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-987609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-987609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)
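This is the last of the per-plugin connectivity checks. Outside of the cleanup the harness does on its own, the profiles created for this section can be removed like any other minikube profile; for example, using profile names from the runs above (a sketch):

	out/minikube-linux-amd64 delete -p auto-987609
	out/minikube-linux-amd64 delete -p kindnet-987609
	out/minikube-linux-amd64 delete -p bridge-987609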

                                                
                                    

Test skip (36/288)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.27.3/cached-images 0
13 TestDownloadOnly/v1.27.3/binaries 0
14 TestDownloadOnly/v1.27.3/kubectl 0
18 TestDownloadOnlyKic 0
29 TestAddons/parallel/Olm 0
39 TestDockerFlags 0
42 TestDockerEnvContainerd 0
44 TestHyperKitDriverInstallOrUpdate 0
45 TestHyperkitDriverSkipUpgrade 0
96 TestFunctional/parallel/DockerEnv 0
97 TestFunctional/parallel/PodmanEnv 0
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
138 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
139 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
140 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
145 TestGvisorAddon 0
146 TestImageBuild 0
179 TestKicCustomNetwork 0
180 TestKicExistingNetwork 0
181 TestKicCustomSubnet 0
182 TestKicStaticIP 0
213 TestChangeNoneUser 0
216 TestScheduledStopWindows 0
218 TestSkaffold 0
220 TestInsufficientStorage 0
224 TestMissingContainerUpgrade 0
228 TestNetworkPlugins/group/kubenet 3.43
233 TestStartStop/group/disable-driver-mounts 0.15
244 TestNetworkPlugins/group/cilium 3.26
x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.3/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.3/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.27.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:296: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-987609 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-987609

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-987609

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-987609

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-987609

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-987609

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-987609

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-987609

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-987609

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-987609

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-987609

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987609"

>>> host: /etc/hosts:
* Profile "kubenet-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987609"

>>> host: /etc/resolv.conf:
* Profile "kubenet-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987609"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-987609

>>> host: crictl pods:
* Profile "kubenet-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987609"

>>> host: crictl containers:
* Profile "kubenet-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987609"

>>> k8s: describe netcat deployment:
error: context "kubenet-987609" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-987609" does not exist

>>> k8s: netcat logs:
error: context "kubenet-987609" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-987609" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-987609" does not exist

>>> k8s: coredns logs:
error: context "kubenet-987609" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-987609" does not exist

>>> k8s: api server logs:
error: context "kubenet-987609" does not exist

>>> host: /etc/cni:
* Profile "kubenet-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987609"

>>> host: ip a s:
* Profile "kubenet-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987609"

>>> host: ip r s:
* Profile "kubenet-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987609"

>>> host: iptables-save:
* Profile "kubenet-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987609"

>>> host: iptables table nat:
* Profile "kubenet-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987609"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-987609" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-987609" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-987609" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987609"

>>> host: kubelet daemon config:
* Profile "kubenet-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987609"

>>> k8s: kubelet logs:
* Profile "kubenet-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987609"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987609"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987609"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-987609

>>> host: docker daemon status:
* Profile "kubenet-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987609"

>>> host: docker daemon config:
* Profile "kubenet-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987609"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987609"

>>> host: docker system info:
* Profile "kubenet-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987609"

>>> host: cri-docker daemon status:
* Profile "kubenet-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987609"

>>> host: cri-docker daemon config:
* Profile "kubenet-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987609"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987609"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987609"

>>> host: cri-dockerd version:
* Profile "kubenet-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987609"

>>> host: containerd daemon status:
* Profile "kubenet-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987609"

>>> host: containerd daemon config:
* Profile "kubenet-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987609"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987609"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987609"

>>> host: containerd config dump:
* Profile "kubenet-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987609"

>>> host: crio daemon status:
* Profile "kubenet-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987609"

>>> host: crio daemon config:
* Profile "kubenet-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987609"

>>> host: /etc/crio:
* Profile "kubenet-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987609"

>>> host: crio config:
* Profile "kubenet-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-987609"
----------------------- debugLogs end: kubenet-987609 [took: 3.260393057s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-987609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-987609
--- SKIP: TestNetworkPlugins/group/kubenet (3.43s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-615088" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-615088
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

TestNetworkPlugins/group/cilium (3.26s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-987609 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-987609

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-987609

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-987609

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-987609

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-987609

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-987609

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-987609

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-987609

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-987609

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-987609

>>> host: /etc/nsswitch.conf:
* Profile "cilium-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987609"

>>> host: /etc/hosts:
* Profile "cilium-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987609"

>>> host: /etc/resolv.conf:
* Profile "cilium-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987609"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-987609

>>> host: crictl pods:
* Profile "cilium-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987609"

>>> host: crictl containers:
* Profile "cilium-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987609"

>>> k8s: describe netcat deployment:
error: context "cilium-987609" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-987609" does not exist

>>> k8s: netcat logs:
error: context "cilium-987609" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-987609" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-987609" does not exist

>>> k8s: coredns logs:
error: context "cilium-987609" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-987609" does not exist

>>> k8s: api server logs:
error: context "cilium-987609" does not exist

>>> host: /etc/cni:
* Profile "cilium-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987609"

>>> host: ip a s:
* Profile "cilium-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987609"

>>> host: ip r s:
* Profile "cilium-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987609"

>>> host: iptables-save:
* Profile "cilium-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987609"

>>> host: iptables table nat:
* Profile "cilium-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987609"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-987609

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-987609

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-987609" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-987609" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-987609

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-987609

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-987609" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-987609" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-987609" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-987609" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-987609" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987609"

>>> host: kubelet daemon config:
* Profile "cilium-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987609"

>>> k8s: kubelet logs:
* Profile "cilium-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987609"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987609"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987609"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-987609

>>> host: docker daemon status:
* Profile "cilium-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987609"

>>> host: docker daemon config:
* Profile "cilium-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987609"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987609"

>>> host: docker system info:
* Profile "cilium-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987609"

>>> host: cri-docker daemon status:
* Profile "cilium-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987609"

>>> host: cri-docker daemon config:
* Profile "cilium-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987609"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987609"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987609"

>>> host: cri-dockerd version:
* Profile "cilium-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987609"

>>> host: containerd daemon status:
* Profile "cilium-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987609"

>>> host: containerd daemon config:
* Profile "cilium-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987609"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987609"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987609"

>>> host: containerd config dump:
* Profile "cilium-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987609"

>>> host: crio daemon status:
* Profile "cilium-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987609"

>>> host: crio daemon config:
* Profile "cilium-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987609"

>>> host: /etc/crio:
* Profile "cilium-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987609"

>>> host: crio config:
* Profile "cilium-987609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-987609"
----------------------- debugLogs end: cilium-987609 [took: 3.094750772s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-987609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-987609
--- SKIP: TestNetworkPlugins/group/cilium (3.26s)